1 comment

  • MarcLore 2 hours ago
    I went through the 68-page model card. Here are the highlights.

    Pricing (per 1M tokens, input/output):

    GPT-5.2 High: $1.75 / $14.00

    Claude Opus 4.5: $5.00 / $25.00

    Gemini 3 Pro: $2.00-4.00 / $12.00-18.00

    Seed2.0 Pro: $0.47 / $2.37

    Seed2.0 Lite: $0.09 / $0.53

    Seed2.0 Mini: $0.03 / $0.31

    Pro output tokens are roughly 6x cheaper than GPT-5.2 High's and 10x cheaper than Claude Opus 4.5's.
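
    Quick sanity check on those multiples, using only the output prices listed above (throwaway Python; the model names are just dict keys):

      # Output price in USD per 1M tokens, from the pricing list above
      output_price = {
          "GPT-5.2 High": 14.00,
          "Claude Opus 4.5": 25.00,
          "Seed2.0 Pro": 2.37,
      }
      for name, price in output_price.items():
          ratio = price / output_price["Seed2.0 Pro"]
          print(f"{name}: ${price:.2f}/1M output, {ratio:.1f}x Seed2.0 Pro")

    That prints 5.9x for GPT-5.2 High and 10.5x for Claude Opus 4.5, so "6x" and "10x" are fair rounding.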

    Performance is not a compromise. IMO 2025 Gold Medal (35/42), CMO 2025 Gold Medal (114/126), Codeforces Elo 3020, LMSYS Vision Arena #3 as of Feb 16. LiveCodeBench v6: 87.8% vs GPT-5.2's 62.6%.

    The most unusual part: they explicitly admit where they fall short. From the model card: "Seed2.0 Series have considerable gaps with Claude in terms of coding, taking SWE-Evo and NL2Repo as examples. Relatively obvious gaps with Gemini in terms of long-tail knowledge, taking SuperGPQA and SimpleQA-Verified as examples." When was the last time you saw that in a model release?

    There's also interesting usage data from ByteDance's MaaS platform in China. Frontend dev dominates coding queries at 50%+, Vue.js adoption runs about 3x React's, and the most common task is bug fixing, not greenfield development. The internet sector consumes the overwhelming majority of API traffic, while manufacturing and automotive are each under 1%.

    Three sizes: Pro for complex reasoning and agent tasks, Lite for balanced production use, Mini for high-throughput batch workloads at near-zero cost.
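
    If you want to eyeball what a given workload would cost on each tier, here's a minimal Python sketch using the list prices above. The 500M-input / 100M-output monthly workload is a made-up placeholder, not anything from the model card:

      # (input, output) price in USD per 1M tokens, from the pricing list above
      prices = {
          "Pro":  (0.47, 2.37),
          "Lite": (0.09, 0.53),
          "Mini": (0.03, 0.31),
      }

      def cost_usd(tier, input_m, output_m):
          # input_m / output_m are token counts in millions
          in_price, out_price = prices[tier]
          return input_m * in_price + output_m * out_price

      # Hypothetical workload: 500M input + 100M output tokens per month
      for tier in prices:
          print(f"{tier}: ${cost_usd(tier, 500, 100):,.2f}/month")

    Under that assumed workload it comes out to $472 on Pro, $98 on Lite, and $46 on Mini, which is the spread the three-tier split is presumably aiming for.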

    Model card PDF: https://lf3-static.bytednsdoc.com/obj/eden-cn/lapzild-tss/lj...