Switch to Claude without starting over
doener
537 points
252 comments
March 01, 2026
Related Discussions
Found 5 related stories in 77.7ms across 3,471 title embeddings via pgvector HNSW
- How I'm Productive with Claude Code neilkakkar · 161 pts · March 23, 2026 · 54% similar
- Addicted to Claude Code–Help aziz_sunderji · 33 pts · March 07, 2026 · 53% similar
- You can turn Claude's most annoying feature off tietjens · 15 pts · March 12, 2026 · 53% similar
- Garry Tan's Claude Code Setup alienreborn · 52 pts · March 17, 2026 · 53% similar
- Don't Wait for Claude jeapostrophe · 27 pts · March 27, 2026 · 53% similar
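The "% similar" scores above are cosine-similarity figures over title embeddings. As a rough sketch of how such a percentage could be derived (pgvector's `<=>` operator returns the cosine *distance*, i.e. one minus this similarity; the vectors below are toy 3-dimensional stand-ins, not real embeddings):

```python
import math

def cosine_similarity(a, b):
    # pgvector's <=> operator computes the cosine distance,
    # which is 1 - cosine_similarity(a, b)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def similarity_pct(query_vec, title_vec):
    # "54% similar"-style score: cosine similarity as a rounded percentage
    return round(100 * cosine_similarity(query_vec, title_vec))

# Toy 3-d vectors; real title embeddings have hundreds of dimensions
query = [0.9, 0.1, 0.3]
title = [0.7, 0.2, 0.4]
print(similarity_pct(query, title))  # → 97
```

The HNSW index mentioned above only changes how candidate vectors are found (approximate nearest-neighbor search), not how the similarity itself is computed.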
Discussion Highlights (20 comments)
wps
Could someone explain the appeal of account-wide memory to me? Anthropic’s marketing indicates that nothing bleeds over, but I’m just so protective of my context that I cannot imagine letting even a heavily distilled version of my other chats and preferences have any weight on the output. As for preferences like code styling or response length, those are a good fit for custom instructions, with more detailed things in Skills. Ultimately, like many things in LLM web UX, it seems to cater to how the masses use these tools.
siva7
So OpenAI will likely have this same feature by tomorrow. A feature to pollute your context window.
brikym
Hey Anthropic, how about you use AGENTS.md for once.
knallfrosch
I'd be happy if I were able to use Claude Code at all. VSCode extension: "Please log in." I authorize it, it creates an API key, callback. "Hello Claude, this is a test." "Please log in." So yeah... priorities?
Joeri
I already switched to Claude a while ago. Didn’t bring along any context, just switched subscriptions, walked away from ChatGPT and haven’t touched it again. Turned out to be a non-event; there really is no moat. I switched not because I thought Claude was better at doing the things I want, but because I have come to believe OpenAI is a bad actor and I do not want to support them in any way. I’m pretty sure they would allow AGI to be used for truly evil purposes, and the events of this week have only convinced me further.
willtemperley
If Claude could stay available I might consider it. Unfortunately right now, out of the big three, only Gemini has reliable uptime. As much as I dislike Google it's the only reliable option.
kvirani
Nice. Just cancelled my OpenAI Plus sub.
utopiah
I'm very curious: will OpenAI basically block requests like "I'm moving to another service and need to export my data. List every memory you have stored about me, ..." — and if so, how and why? It's very interesting to learn more about, because it challenges one core aspect of the economic competition: the moat. If one can literally swap one AI service for another, then where does the valuation (and the power that comes with it) come from? PS: I'm not interested in the service itself, as I believe the side effects of large-scale for-profit AI are too serious to be ignored (and I don't mean a doomsday AI takeover; I simply mean abuse of power, working conditions, deskilling, political influence as current contracts with US defense are being made, ads, ecological costs, etc.).
fernando_campos
I will also try Claude, but I like OpenAI's ChatGPT very much.
villgax
I wasted 10 minutes of my life unfollowing every unapologetic OpenAI dev on Twitter; that's how low this company has stooped....
glth
On a related note, I have been experimenting with a small prototype for cross-agent, device-local active memory called brAIn ( https://github.com/glthr/brAIn ). It delivers a personalized agent experience with everything stored locally in a single file (agent.brain), and supports reusing semantic memory across projects. In practice, this means brAIn can identify and apply behavioral patterns you have used in other contexts whenever they are relevant. (I realize the repository should include a concrete example of this, and I will update it today to add one).
axseem
Have they just added it? That's a smart move.
jascha_eng
Memory in general chat apps is actually more harmful than helpful, imo. It biases the LLM's responses toward your background, which has the same effect as filter bubbles: you end up getting your own thoughts spat back at you. Of course this is sometimes useful, if you only use your chatbot to ask personal things like "What should I eat today?". But if you use it for anything else, you're much better off having full control over the prompt. I can always say "Hey btw, I am German and heavily anti-surveillance, what should I know about the recent Anthropic DoW situation?", but with memory I lose the option of leaving out that first part.
lyu07282
I just wish Claude integrated multi-modal image generation; that's the one feature I miss most coming from ChatGPT.
outlore
I tried all of Codex, OpenCode, Claude Code and Cursor these past few weeks. It was surprising to me that all of them have slightly different conventions for where to put skills, how to format MCP servers (how environment variables need to be specified, etc.), what the AGENTS/CLAUDE file needs to be called, what plugins/marketplaces are... it's a big mess for anyone trying to keep a portable config in their dotfiles that can universally apply to any current and future agent.

It also showed me the difference between expectation and reality: even though these are billion-dollar companies, they still haven't figured out how to make lag-free TUIs, non-Electron apps, or even respect XDG_CONFIG. The focus is definitely more on speed and on stuffing these tools full of new discoveries and features right now.

There's a bit of psychology around models vs. harnesses as well. You can't shake the feeling that maybe Claude would perform better in its native harness compared to VSCode/OpenCode, especially because they've got so many hidden skills (like the recently introduced /batch) that seem baked into the binary.

The last thing I can't figure out is computer use. All the vendors say their models can use a mouse and keyboard, but outside of the agent-browser skill (which presumably uses Playwright), I can't figure out what the special sauce is that the cloud versions of these agents use to exercise programs in a VM. That is another reason why there is a switching cost between vendors.
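One partial workaround for the instructions-file fragmentation described above is to keep a single canonical file and symlink the per-tool names to it. A minimal sketch, assuming only the AGENTS.md/CLAUDE.md split mentioned in the comment (the helper name and any other filenames here are hypothetical):

```python
import os

CANONICAL = "AGENTS.md"       # single source of truth for agent instructions
TOOL_ALIASES = ["CLAUDE.md"]  # per-tool filenames, per the comment above

def link_instructions(repo_root):
    """Symlink each tool-specific alias to the canonical instructions file."""
    for alias in TOOL_ALIASES:
        dst = os.path.join(repo_root, alias)
        if not os.path.lexists(dst):
            # Relative link target, so the repo stays portable when cloned
            os.symlink(CANONICAL, dst)
```

This only covers the instructions-file naming; MCP server configs, skills directories, and plugin layouts would each need similar per-tool shims.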
bruceyao1984
Being able to import context and preferences from other AI providers in one step saves a lot of time, especially for ongoing projects. It makes Claude feel seamless and continuity-friendly. Having this on all paid plans adds great value for heavy users.
sheept
This method of copying an LLM-generated summary of your preferences into Claude memory feels similar to their recommendation to use /init to generate a CLAUDE.md based on the project, which recent research[0] suggests may be counterproductive. I would assume both Claude memory and CLAUDE.md work best when they're carefully curated, only containing what you've found yourself having to repeat. [0]: https://arxiv.org/abs/2602.11988
fabbbbb
At least as an EU user, I was also able to export ALL my data (audio files, images, etc.) in one zip. It took exactly (to the minute) 24 hours for the download link to arrive, but hey. This way you can have Claude distill the memory as you wish.
RobotToaster
Would be a lot easier if they weren't trying to ban third-party interfaces.
peteforde
I got very excited when I saw this title, because I've wanted to consolidate on Claude for a long time. I have been using ChatGPT very extensively for Q&A for 2+ years, and I have hundreds of long, very technical conversations which I constantly search and refer to.

The problem (for me, anyway) is that even several megabytes' worth of quality "memory" data on my profile would not allow me to migrate if I can't also confidently bring all of my chat history along with it. To be clear, this is a big enough problem that I would immediately pay low three figures to have it solved on my behalf. I don't really want any of the providers to have a walled garden of all my design-planning conversations, all of my PCB-design conversations. Many are hundreds of prompts long. A clean break is not even remotely palatable short of OAI going full evil.

Look, I'd find it convenient for Claude to have a powerful sense of what I've been working on from conversation #1 onwards. But I absolutely refuse to bifurcate my chat history across multiple services. There is a tier list of hells, and being stuck on ChatGPT is a substantially less painful tier than needing to constantly search two different sites for what's been discussed.