
From 2025 to 2026, almost every large enterprise leader faces the same question:
"How should our AI projects be implemented?"
Over the past two years of research, we've found that enterprise AI implementation almost always runs into an "impossible triangle":
The three goals appear mutually exclusive, so enterprises typically choose between two paths, each with its own dilemmas.
Today, we want to explore a third path from a strategic perspective: not self-building an LLM, not developing an Agent framework from scratch, but capability orchestration - using top-tier LLMs as the engine and combining a mature skill system with Agent templates to quickly assemble complete data analysis capabilities tailored to enterprise business needs.
AskTable has already validated this path in leading enterprise cases. This article dissects its logic, architecture, and selection methodology.
The temptation of self-building LLMs is straightforward:
Many enterprises' AI roadmap looks like this: form an AI team → collect industry data → train or fine-tune a model → deploy → iterate continuously.
We've spoken with several enterprises that have tried, or are currently pursuing, self-built LLMs, and found that the sunk costs come mainly from three dimensions:
First, talent costs are severely underestimated.
Training and maintaining an LLM requires more than just algorithm engineers. A complete team typically includes:
In first-tier cities, such a team typically costs 8-15 million yuan per year. More importantly, this talent is extremely scarce in the market, with recruitment cycles of 6-12 months.
Second, computing power investment is a bottomless pit.
A complete LLM training run, from data preparation to model convergence, easily costs millions to tens of millions of yuan. Even fine-tuning an existing open-source model (such as Qwen or LLaMA) requires a GPU cluster.
More critically, this isn't a one-time investment: models need continuous updating, data needs continuous labeling, and results need continuous evaluation.
Third, the risk of under-delivery.
Even after investing in talent and computing power, the resulting industry model may not be much better than calling a top-tier API in many scenarios. The reason is simple: the capability iteration speed of general-purpose LLMs far exceeds that of any enterprise's self-built model.
By the time you've spent half a year fine-tuning a model, the base model may have gone through 2-3 major versions, reopening the capability gap.
Self-building an LLM isn't impossible, but it only suits the few leading enterprises with long-term strategic investment capacity and deep talent reserves. For the vast majority of enterprises, it's a path of high investment, long cycles, and high risk.
Since self-building LLMs costs too much, many enterprises choose the second path: integrate open-source Agent frameworks (like LangChain, AutoGen, CrewAI, etc.) and build their own intelligent applications based on existing LLM APIs.
This path sounds very reasonable: no need to maintain an AI team, no need to buy computing power; a few development engineers can handle it.
But reality is often more complex than imagined.
First, the prompt-debugging black hole.
The core of an Agent framework is prompt engineering, but prompt debugging is a black-box process that depends heavily on experience:
Many teams have spent weeks or even months debugging prompts, only to find the results still unstable.
Second, tool development fragmentation.
Agent frameworks provide the architecture, but every tool must be developed in-house:
Each tool requires investment in development, testing, and maintenance, and the collaboration logic between tools must also be designed from scratch.
Third, the productionization gap.
There is a huge gap between a demo and a production environment:
None of these problems have ready-made answers in Agent framework documentation; you have to solve them all yourself.
Agent frameworks lower the bar for AI expertise but significantly raise the bar for engineering capability. They suit teams with strong development skills who are willing to keep investing in debugging and maintenance. But for enterprises that want out-of-the-box usage, this too is a path full of uncertainty.
The core idea of capability orchestration: don't spend energy reinventing the wheel; instead, combine top-tier capabilities into complete solutions for business scenarios.
Specifically, it consists of three layers:
┌──────────────────────────────────────────────────────────────────────────────┐
│ Application Layer (Business-facing)                                          │
│ Industry Agent Templates | Custom Skills | Conversational Analysis Interface │
├──────────────────────────────────────────────────────────────────────────────┤
│ Capability Layer (Core Orchestration)                                        │
│ 11 Skills | Memory System | Tool System | Knowledge Retrieval                │
├──────────────────────────────────────────────────────────────────────────────┤
│ Engine Layer (Model-driven)                                                  │
│ Qwen 3.6-Plus | Claude Opus | Other Top Models                               │
└──────────────────────────────────────────────────────────────────────────────┘
The engine layer doesn't need to be self-built; it directly integrates top models such as Qwen 3.6-Plus and Claude Opus 4.5, whose capabilities iterate far faster than any enterprise's self-built team could match.
The capability layer provides a standardized skill system - anomaly detection, attribution analysis, trend prediction, data visualization - universal core capabilities in the data analysis domain, already validated across extensive real-world scenarios.
The application layer combines capability-layer skills into Agent templates for different industries, which enterprises can use directly or customize further.
Capability orchestration succeeds as a third path because it addresses the pain points of both previous paths:
Compared to self-building LLMs:
Compared to Agent framework development:
This path works because of an important industry shift: the capability gaps between top LLMs are narrowing.
Take Qwen 3.6-Plus as an example: its programming capability improved 2-3x over the previous generation, making it comparable to, or better than, some top international models in code generation, data analysis, and complex reasoning scenarios.
This means enterprises no longer need to agonize over "which model is best" when selecting a model; they can choose flexibly based on practical factors such as cost, compliance, and response speed. What truly differentiates enterprises isn't the model itself, but how model capabilities are orchestrated into complete business solutions.
This is precisely capability orchestration's value.
AskTable's engine layer supports multiple LLMs, including but not limited to:
The engine layer's design philosophy is "plug-and-play models": enterprises can choose the most suitable model based on cost, performance, and compliance requirements without affecting upper-layer capabilities.
More importantly, when the underlying model upgrades, upper-layer capabilities benefit automatically. For example, after Qwen 3.6-Plus was released, every AskTable Agent using the Qwen engine automatically gained stronger programming and analysis capabilities without a single code change.
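As an illustration, the "plug-and-play models" idea can be sketched in a few lines of Python. This is a minimal sketch under assumed names (`LLMEngine`, `complete`, and `attribution_skill` are hypothetical, not AskTable's actual API); the point is that a skill depends only on an abstract interface, so swapping engines touches no skill code.

```python
from abc import ABC, abstractmethod

class LLMEngine(ABC):
    """Abstract engine interface; upper-layer skills depend only on this."""
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class QwenEngine(LLMEngine):
    def complete(self, prompt: str) -> str:
        # In a real deployment this would call the Qwen API; stubbed here.
        return f"[qwen] {prompt}"

class ClaudeEngine(LLMEngine):
    def complete(self, prompt: str) -> str:
        # Likewise a stub standing in for the Claude API.
        return f"[claude] {prompt}"

def attribution_skill(engine: LLMEngine, metric: str) -> str:
    # The skill sees only the abstract interface, so changing
    # the underlying model requires no change to skill code.
    return engine.complete(f"Explain the recent change in {metric}.")

print(attribution_skill(QwenEngine(), "monthly sales"))
```

Switching a deployment from one model to another then amounts to passing a different `LLMEngine` instance into the same skills.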
The capability layer is AskTable's core competitive advantage, containing 11 battle-tested data analysis skills:
| Skill | Capability Description | Typical Scenario |
|---|---|---|
| Anomaly Detection | Auto-identify anomaly points and outliers | Production quality monitoring, transaction anomaly discovery |
| Attribution Analysis | Analyze root causes of metric changes | Sales decline analysis, cost fluctuation tracing |
| Trend Prediction | Predict future trends based on historical data | Sales forecasting, inventory planning |
| Data Cleaning | Auto-handle missing values, duplicates, format issues | Multi-source data integration |
| Statistical Analysis | Descriptive stats, hypothesis testing, correlation analysis | Market research analysis |
| Comparative Analysis | Multi-dimensional comparison, discover difference patterns | Competitor comparison, regional difference analysis |
| Clustering Analysis | Auto-group, discover inherent data structure | Customer segmentation, product classification |
| Association Analysis | Discover relationships between variables | Market basket analysis, behavior correlation |
| Data Visualization | Auto-select optimal chart types and generate | Report generation, data display |
| SQL Generation | Natural language to SQL, automatic query execution | Self-service data retrieval for non-technical users |
| Report Generation | Auto-write structured analysis reports | Daily, weekly, monthly reports |
These skills aren't a simple feature list but capability modules that have been validated across extensive scenarios, carefully optimized, and made stably reusable.
Building on combinations of capability-layer skills, AskTable provides 9 out-of-the-box industry Agents covering major industries including manufacturing, finance, retail, healthcare, education, energy, logistics, internet, and government.
Each industry Agent is optimized for its industry's data structures, analysis scenarios, and decision-making habits:
AskTable also supports custom Skills: enterprises can develop their own skill modules for their business needs and integrate them seamlessly with the built-in skills.
Capability orchestration's core logic can be summarized in one sentence:
Use the best models (engine layer) to drive the most mature capabilities (capability layer), assembling the most suitable business solutions (application layer).
This architecture's advantages:
When choosing an AI implementation path, enterprises should evaluate along these dimensions:
| Evaluation Dimension | Self-Build LLM | Agent Framework Dev | Capability Orchestration |
|---|---|---|---|
| Initial investment (CNY) | 10M+ | 500K-2M | 100K-500K |
| Deployment cycle | 1-3 years | 3-6 months | 2-4 weeks |
| Talent requirements | AI team (10+) | Developers (3-5) | Business staff + IT support |
| Maintenance cost | Extremely high (continuous) | High (continuous debugging) | Low (platform handles) |
| Result stability | Depends on team capability | Fluctuating | Stable |
| Model iteration dividends | Requires proactive tracking | Must adapt to new versions | Benefits automatically |
| Industry adaptation speed | Slow | Medium | Fast |
Does your enterprise meet these conditions?
│
├── A dedicated AI budget of 50M+?
├── A professional AI team of 10+?
├── A 2+ year window for AI construction?
├── A clear need for a differentiated industry model?
│
└── If all YES → Consider self-building LLM
│
Otherwise:
│
├── Does your enterprise have a full-stack development team of 5+?
├── Is it willing to invest 3-6 months in Agent development and debugging?
├── Can it accept fluctuating results and continuous maintenance?
│
└── If all YES → Try Agent framework development
│
Otherwise:
│
└── Capability orchestration (AskTable) is your optimal choice
- Deploy in 2-4 weeks
- Out-of-the-box
- Stable and reliable
- Continuously enjoy model iteration dividends
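For readers who prefer code to flowcharts, the decision tree above can be encoded as a small function. The function and parameter names are illustrative; the thresholds follow the tree above.

```python
def recommend_path(budget_m_cny: float, ai_team_size: int, window_years: float,
                   needs_custom_model: bool, dev_team_size: int = 0,
                   months_available: int = 0, accepts_fluctuation: bool = False) -> str:
    """Walk the selection decision tree top to bottom."""
    # Branch 1: all four self-build conditions must hold.
    if (budget_m_cny >= 50 and ai_team_size >= 10
            and window_years >= 2 and needs_custom_model):
        return "self-build LLM"
    # Branch 2: all three Agent-framework conditions must hold.
    if dev_team_size >= 5 and months_available >= 3 and accepts_fluctuation:
        return "agent framework development"
    # Default: capability orchestration.
    return "capability orchestration"

print(recommend_path(budget_m_cny=5, ai_team_size=2, window_years=1,
                     needs_custom_model=False))  # capability orchestration
```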
Even for enterprises capable of self-building an LLM, we suggest first using capability orchestration to validate business scenarios and value, and only then deciding whether self-building is needed.
The reason is simple: capability orchestration lets you see AI's actual effect on business scenarios within weeks, while self-building takes 1-2 years to show results. Steering strategy by actual results is far more reliable than steering technology investment by strategic imagination.
Reviewing the history of enterprise IT, you'll notice a pattern:
But ultimately, what enterprises truly care about was never the underlying technology choice, but how to use these technologies to solve business problems.
LLMs and cloud infrastructure are moving toward homogenization and commoditization. Future enterprise core competitiveness isn't which model they use, but how model capabilities are orchestrated into complete business solutions.
Based on industry observation, we predict three trends in the next 3-5 years:
Trend 1: Models as infrastructure
Enterprises no longer need to care about underlying models, just as they don't need to care about which database version they use now. Model capability differences will narrow, access costs will decrease, switching will be seamless.
Trend 2: Capabilities as assets
Enterprise AI assets will no longer be "trained models" but "orchestrated capability combinations" - which skills to use, how to combine them, and which scenarios to adapt them to. These are what truly constitute a moat.
Trend 3: Agents as employees
Industry Agents will increasingly resemble "digital employees" - with their own professional skills, work experience, learning ability, capable of independently completing specific position's data analysis work. Enterprise human resource planning will start including "digital employee" headcount.
AskTable continues to deepen its work on capability orchestration:
Our vision: let every enterprise gain world-class data analysis capabilities at the lowest cost and the fastest speed.
Enterprise AI implementation was never an either-or question.
Self-building LLMs has its value, and Agent frameworks have their applicable scenarios. But for the vast majority of enterprises that want to see AI's value quickly without over-investing in infrastructure, capability orchestration is a pragmatic, efficient, and sustainable path.
This path's essence: Use others' engines, orchestrate your own capabilities, solve your own problems.
While your competitors are still recruiting for self-built models and wrestling with Agent debugging, you'll have already used capability orchestration to validate business scenarios and deliver business results.
This is probably the smartest approach to enterprise AI implementation.