ChatGPT launched on November 30th, 2022. Within weeks it was being described as the fastest-growing consumer application in history. For developers, the implications go beyond the product itself.
What ChatGPT does differently
ChatGPT is neither a better search engine nor a conventional chatbot. It is a conversational interface to a large language model that follows instructions and maintains context across a conversation. What distinguishes it from previous LLM products is the instruction-following: you can tell ChatGPT to write, summarise, translate, explain, debug, or code, and it attempts to do exactly what you specified rather than merely generating related text.
The coding capability
ChatGPT can write, explain, and debug code. The quality for common patterns is high. The failure modes are subtle: it can produce code that looks correct but has logic errors that only surface in edge cases. For code review and explanation, where a human verifies the output, the capability is immediately useful. For unsupervised code generation, verification is essential.
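A minimal, hypothetical illustration of this failure mode (the function and names are invented for the example, not output from ChatGPT): a leap-year check that looks correct, passes the obvious cases, and fails only on century years.

```python
def is_leap_year(year: int) -> bool:
    """Plausible-looking but subtly wrong: applies the common
    divisible-by-4 rule but omits the century exceptions."""
    return year % 4 == 0

# Passes the cases a quick glance would try...
assert is_leap_year(2024) is True
assert is_leap_year(2023) is False

# ...but 1900 was NOT a leap year. The correct rule needs the
# 100/400 exceptions from the Gregorian calendar:
def is_leap_year_correct(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The bug only surfaces on this edge case:
assert is_leap_year(1900) != is_leap_year_correct(1900)
```

A reviewer who only checks recent years would approve the first version, which is exactly why verification against edge cases matters more than surface plausibility.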
What it means for search
The use case that most excited early users was asking ChatGPT questions instead of searching. For questions with well-defined answers that do not require real-time information, ChatGPT often produces a more direct answer than a search results page. The limitations are the training cutoff and the lack of citations for verification. For many queries the user experience was so dramatically better that these information-quality limitations were overlooked, which fueled the later backlash over hallucinations.
The developer experience shift
Developers who used ChatGPT in December 2022 changed how they approached coding tasks. Writing boilerplate, looking up API signatures, debugging syntax errors, generating test cases: all of these became conversational tasks. The IDE-centric workflow that had been stable for decades started to change at the edges.
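A hypothetical sketch of one such conversational task (the `slugify` function and its test table are invented for illustration): a developer asks the model to draft test cases for a small utility, then reviews them by hand.

```python
import re

def slugify(text: str) -> str:
    """Lowercase, collapse runs of non-alphanumerics into hyphens,
    and strip leading/trailing hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# The kind of test table a model can draft in seconds and a
# human then verifies, instead of writing it from scratch:
cases = {
    "Hello, World!": "hello-world",
    "  spaces  ": "spaces",
    "already-slugged": "already-slugged",
    "": "",
}
for raw, expected in cases.items():
    assert slugify(raw) == expected
```

The work shifts from typing the boilerplate to auditing it, which is the edge-of-workflow change the paragraph above describes.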