The nickname is "inference games": they have to raise prices or switch to a dumber model to make money on fixed-cost plans. Kilocode, Cline, and Aider, by contrast, are completely pay-as-you-go and allow cheaper models — Grok Code is $0.20/MT, Sonnet is $3/MT, and I've found the former perfect.
I absolutely agree. Qwen Coder 480b is my go-to; I'm also paying for OpenAI Plus/Codex and Cursor, and none of those models ever give me an edge over Qwen. One can always set up an "ultrathink" model for when push comes to shove — I use o3 through the API, but I seldom need it. The same applies to a large-context (i.e. 1M) alternative, image processing, etc. I'm averaging 120M tokens/day that way and couldn't be happier.
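For a rough sense of what those per-million-token rates mean at this kind of volume, here's a back-of-envelope sketch using the figures quoted above ($0.20/MT for Grok Code, $3/MT for Sonnet, ~120M tokens/day). This assumes a single flat rate for all tokens; real bills price input, output, and cached tokens differently.

```python
# Back-of-envelope daily cost at a flat per-million-token rate.
# Assumption: all tokens billed at one rate (real APIs split
# input/output/cached pricing, so actual costs will differ).

def daily_cost(tokens_millions: float, usd_per_million: float) -> float:
    """USD cost for a day's usage at a flat per-million-token rate."""
    return tokens_millions * usd_per_million

TOKENS_PER_DAY_M = 120  # ~120M tokens/day, as quoted above

grok_code = daily_cost(TOKENS_PER_DAY_M, 0.20)  # Grok Code at $0.20/MT
sonnet = daily_cost(TOKENS_PER_DAY_M, 3.00)     # Sonnet at $3/MT

print(f"Grok Code: ${grok_code:.2f}/day")  # $24.00/day
print(f"Sonnet:   ${sonnet:.2f}/day")      # $360.00/day
```

At that volume the rate difference is roughly $24/day versus $360/day, which is why the cheaper model being "good enough" matters so much on pay-as-you-go plans.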
They promised the legacy $30 plan would get grandfathered into the $50 tier, and now they're not only abandoning that — they're actually giving it a bit less than 60% as many credits?
One less musical chair
Who decided on this?
The Reddit comment section on that attempt at damage control is not looking pretty.
There is no such thing as a free lunch!