Rising with the Tide or Drowned by the Wave

AskTable Team · 2026-04-07

Here's the thing.

Early 2024: the hundred-models war. That period was particularly lively - open-source models kept popping up one after another. One day a 70B-parameter model was released; the next, a 100B-parameter model went open source. Many people rubbed their hands in excitement, eagerly deploying the models locally and even starting to fine-tune them with their own data.

Honestly, back then everyone was essentially playing with a really cool new toy. Something new came out, you got it running first. You fed it your own data, tweaked the parameters, looked at the results - it was quite interesting.

But after playing around, a very practical question presented itself.

So, how exactly do I combine this with my business scenarios?

Honestly, this question troubled me for quite some time back then. Watching the market's fiery hundred-models war, I kept asking myself: amid all this excitement, could we on the product side find a direction that could truly be put into practice?

At that time, AskTable - internally we called it RMB, the Trusted Metadata Brain - was just getting started from a technical prototype. The team's judgment was simple: LLMs were advancing fastest in programming because programming formed a complete closed loop of self-development, self-use, and self-testing. So what about data analysis - could we let business people converse with data in natural language?

This question guided all our work over the next two years.

Honestly, I didn't know if this direction was correct at that time. I was still exploring myself. But curiosity pushed me forward.

Then I witnessed a particularly interesting cycle: from technological euphoria, to spreading anxiety, to a return to rationality. Less than two years, and it felt like riding a roller coaster.

This article shares, holding nothing back, everything we learned from two years of frontline product work - the pitfalls we stepped in, the signals we saw, the things we figured out. I don't know whether it will be useful to everyone, but I think the topic is worth writing about.

There is really just one core question. As LLM capabilities evolve at terrifying speed year after year, are we who build application-layer products rising with the tide, or will we be drowned by the wave?

Tide and Ship - Metaphor for 2024-2026 AI Market Changes

2024 Hundred-Models War: From Technological Euphoria to Implementation Anxiety

The focus of technical discussions was parameters, training costs, benchmark rankings. These indicators were genuinely exciting. But "70B parameters vs. 100B parameters" gives a company's CFO no intuition at all, and hearing "the context window expanded from 128K to 256K" doesn't help a sales manager make better decisions tomorrow.

This phase of AI was springtime for a few.

But in this spring, we saw a signal. LLMs' programming capabilities exceeded expectations by a wide margin. They could understand code logic, generate structured queries, handle complex semantic reasoning. If we could channel this capability into data analysis, letting business people ask questions in natural language while AI understood intent, generated queries, and returned results, that would be an entirely new paradigm.

So we went for it.

Our starting point was simple - trying to use LLMs for Text-to-SQL.

The first layer of difficulty quickly emerged. SQL generation is just the tip of the iceberg; the really hard problems lurk beneath the surface.

Take the three characters "销售额" (sales amount): different companies define it completely differently. Tax-inclusive or not, returns included or not, discounts included or not. The table name ord_dtl is just an identifier to the database, but the AI needs to understand it as "order details." When a business person asks about "products that haven't been performing well recently," the AI needs to know which specific metric "performing poorly" refers to. Then there's permission control: different departments asking the same question should get different answers.

These aren't problems with model capabilities. They're problems of business understanding.

A model can write grammatically perfect SQL, but if it doesn't understand business semantics, the SQL it writes is wrong.
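To make the gap concrete, here is a minimal sketch of what a business semantics layer does. All names are invented for illustration and are not AskTable's actual API: the same business term resolves to different SQL per company, and physical table names are mapped to their business meaning.

```python
# Hypothetical semantics layer: company-specific metric definitions
# that no general-purpose model can guess from the schema alone.
METRIC_DEFINITIONS = {
    "sales_amount": {
        "company_a": "SUM(net_amount + tax_amount)",          # tax-inclusive
        "company_b": "SUM(net_amount) - SUM(refund_amount)",  # net of returns
    },
}

# Physical identifiers mapped to business meaning.
TABLE_GLOSSARY = {"ord_dtl": "order details"}

def resolve_metric(term: str, company: str) -> str:
    """Return the company-specific SQL expression for a business term."""
    definitions = METRIC_DEFINITIONS.get(term, {})
    if company not in definitions:
        raise KeyError(f"no definition of {term!r} for {company!r}")
    return definitions[company]

print(resolve_metric("sales_amount", "company_a"))
# -> SUM(net_amount + tax_amount)
```

The point of the sketch is that the hard part is curating these mappings per customer, not generating the surrounding SQL.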

AskTable's product form in this phase was simple and rough. But it validated a key assumption: AI can indeed lower the barrier to data querying, but only if someone helps the AI understand the business.

Honestly, this conclusion is quite counter-intuitive. Everyone thinks models are getting smarter and will soon understand everything. But the reality is, the smarter models get, the more they need someone to build a bridge between them and the enterprise.

The 2024 market ecosystem was also interesting. The self-media scene was already quite mature, funding announcements were everywhere, and the narrative that "AI will change everything" filled the air. But the silent majority of enterprises were still watching - evaluating, waiting for others to step on the mines first.

Only a few groups truly started acting: young entrepreneurs from Jiangsu and Zhejiang, the Yangtze River Delta, and the Pearl River Delta. They weren't large in scale, but they reacted quickly and had already begun tentatively introducing AI into their business processes.

This pragmatic attitude formed a sharp contrast with the market's clamor.


2025 MCP and Agent: What to Do After Connecting

By 2025, a key change occurred.

AI began entering developers' daily work at scale.

Tools like Cursor and Claude Code turned AI programming from concept to daily reality. More and more developers discovered that AI could not only complete code but also understand requirements, design architecture, and review code.

A broader discussion followed. Will AI replace programmers?

This discussion spread from the IT circle to practitioners of every kind. Everyone began to realize that what they did - writing code, building spreadsheets, writing reports, doing design - AI could do too, often better and faster.

Product people naturally thought of a question. If AI can write code, can AI do data analysis directly?

The answer is yes. But this led to the next question.

When AI can do everything, what makes your product worth choosing?

This question troubled me for quite some time.

Then the MCP protocol emerged. MCP, the Model Context Protocol, took off in 2025.

On the surface, it's just a protocol defining how AI models interact with external tools and data sources. But its significance goes far beyond that. MCP addressed the AI data-silo problem, allowing AI assistants to access enterprise data directly.

This is exactly what AskTable had been working on. We had started thinking about the same problem - how to let AI access enterprise data safely, controllably, and with proper permissions - a few months before the MCP protocol appeared. MCP gave us a standardized answer.

But the more important realization was this: when connections become standardized, connection itself is no longer a moat.

The real moat is what you do after connecting.

From AI islands, to MCP connections, to capability orchestration. These are three stages of technical architecture evolution, and also the technical path AskTable has traveled over these three years.


In the second half of 2025, Agent skills evolved from a vague concept into definable, composable, orchestratable modules.

This isn't a simple renaming. In essence, AI moved from general conversation to structured, composable capabilities.

AskTable's Skill system took shape during this period. Anomaly detection, attribution analysis, trend prediction, weekly report generation. These aren't just a random list of features - they are analysis capability encapsulations abstracted from extensive enterprise implementation practices and validated through real use.

Behind each Skill is massive understanding of business scenarios, alignment of data definitions, and refinement of output formats. They have value not because AI can do them, but because we know what enterprises need and encapsulated that need into AI-executable Skills.
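As a rough illustration of what "encapsulating a capability as a Skill" means, here is a hypothetical Python sketch - the class and field names are invented, not AskTable's implementation. A Skill declares its required inputs and renders a structured task from a template, instead of relying on free-form chat.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """A named, composable analysis capability with an explicit contract."""
    name: str
    required_inputs: list          # inputs the caller must supply
    prompt_template: str           # filled with aligned business definitions

    def render(self, **inputs) -> str:
        """Build the concrete task text, failing fast on missing inputs."""
        missing = [k for k in self.required_inputs if k not in inputs]
        if missing:
            raise ValueError(f"missing inputs: {missing}")
        return self.prompt_template.format(**inputs)

# Example: an anomaly-detection Skill (template text is illustrative).
anomaly_detection = Skill(
    name="anomaly_detection",
    required_inputs=["metric", "window"],
    prompt_template=(
        "Flag values of {metric} over the last {window} that deviate from "
        "the trend, using the company's agreed definition of {metric}."
    ),
)

print(anomaly_detection.render(metric="sales_amount", window="30 days"))
```

The design choice worth noting is the explicit input contract: an orchestrator can validate and compose Skills without knowing anything about their internals.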

For more on AskTable's AI Agent architecture design, see AskTable AI Agent Working Principles.

Back to the 2025 self-media ecosystem. That year, AI commentary formed a particularly clear three-layer structure.

The first layer is technical evangelists. Those who truly understand technology and advance industry cognition - valuable. The second layer is anxiety sellers. "If you don't use AI now, you'll be eliminated" - creating panic for traffic. The third layer is course and tool sellers. Selling courses, tools, and communities in AI's name - profit-driven.

These three layers stacked together formed a huge noise field.

Truly pragmatic enterprise decision-makers felt even more confused in this noise field - not for lack of information, but because of too much noise.

Our choice as product people was clear. Don't deify, don't sell anxiety, don't promise overnight success.

I've always believed AI's value to enterprises is real. But it's not a magic one-click effect - it's a systems engineering project requiring understanding, planning, and continuous investment.

2026 Paradigm Shift: Rising with the Tide or Drowned by the Wave

By 2026, the market showed a change that really excited me.

Everyone began realizing that AI's impact on productivity isn't a 10x or 100x gradual improvement, but a 1000x or 10000x paradigm shift.

The basis for this judgment comes from the front lines. AI projects inside enterprises went from pilots to standard features; there was no longer a need to convince bosses why to use AI - the discussion had moved to how to use it better. Entrepreneurs in Jiangsu, Zhejiang, and the Yangtze and Pearl River Delta regions went from understanding AI to using AI to transform their business processes. From watching to acting.

Dynamic young entrepreneurs began systematically thinking: how can my company adapt to the AI era?

Thinking about it gets me excited.

But this magnitude leap also brought a new problem. When underlying AI capabilities evolve at 1000x speed annually, how do application-layer products built on these capabilities ensure their value isn't diluted by the models' own progress?

This is the question we think about most.

A metaphor helps here.

LLM capabilities are like water, rising every year. Application-layer products are boats floating on the water. If a boat's value is built solely on calling a good model, then yes, you'll rise when the water rises. But when the water recedes, or when others use a better model, you'll be drowned.

What really keeps the boat floating isn't the water. It's the boat's construction.

This metaphor leads to AskTable's core strategic question. What have we built that won't be drowned by LLM progress? What kind of hull lets us sail steadily regardless of the water level?

AskTable's Four Layers of Moat

Following this thinking, we mapped AskTable's four layers of moat.

The first answer: domain expertise.

SQL generation is just a technical action. What's really hard is understanding. Repurchase rate definitions differ completely in e-commerce, SaaS, and retail industries. Gross margin has different standards in different enterprises. Active user measurement standards vary wildly between B2B and B2C businesses. The same metric, same industry, same enterprise, can have different calculation methods at different times.

This knowledge isn't covered by model pre-training. It comes from accumulated experience across many enterprise deployments, from in-depth dialogue with customers in different industries, and from repeatedly refining business scenarios.

AskTable's accumulation in this area: serving over 260 enterprise customers, covering 9 industry agents, building a complete business semantics layer. This accumulation won't become outdated with the next model release. It's a product of time.


The second answer: data infrastructure.

Models don't understand your database structure, permission system, or data quality status. They don't know which department's people should see which data, which fields need desensitization, which SQL executions might affect production environments, or how to ensure query continuity when data sources change.

AskTable's investment in this area: 20+ database adapters, row-level and column-level permission control, data masking, SQL safety guards.
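As a hedged sketch of two of those guards - illustrative only, not AskTable's actual implementation - the following shows a read-only SQL gate and role-based column masking:

```python
import re

# Block anything that isn't read-only before it reaches the database.
FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|truncate)\b", re.I)

# Hypothetical policy: which columns each role may not see.
MASKED_COLUMNS = {"analyst": {"phone", "id_number"}}

def guard_sql(sql: str) -> str:
    """Allow only read-only statements through to execution."""
    if FORBIDDEN.search(sql):
        raise PermissionError("write statements are blocked")
    return sql

def mask_row(row: dict, role: str) -> dict:
    """Replace sensitive fields with a placeholder for restricted roles."""
    hidden = MASKED_COLUMNS.get(role, set())
    return {k: ("***" if k in hidden else v) for k, v in row.items()}

print(guard_sql("SELECT customer, phone FROM ord_dtl"))
print(mask_row({"customer": "Acme", "phone": "13800000000"}, role="analyst"))
# -> {'customer': 'Acme', 'phone': '***'}
```

A production system would use a real SQL parser rather than a regex, but the shape of the work - policy tables, checks on every query, masking on every row - is the same.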

Honestly, this is all dirty work. But these unsexy details are exactly what form the moat that's hardest for models to replicate.

For more, see AskTable Data Source Integration Guide and AskTable Data Security Best Practices.

The third answer: product thinking.

This is the layer I'm proudest of at AskTable.

Our evolution path was from ChatBI to Canvas. This change wasn't because ChatBI was technically infeasible, but because we discovered that conversation isn't the right interaction method for data analysis.

The problem with ChatBI is it assumes data analysis is a linear Q&A process. But real data analysis is non-linear. You need to view multiple metrics simultaneously, make comparisons, find correlations, run verifications. Canvas architecture was built to solve this. It allows users to operate multiple analysis components simultaneously in an editable space, building analysis logic like assembling building blocks.

I stepped into this pitfall myself early on. At first we thought conversation was the future, but as we went deeper we discovered that what users really needed wasn't a chat window but a space where they could repeatedly refine, build, and share analyses.

For more thinking on this evolution, see From Chat to Canvas.

The final answer: user understanding.

Ultimately, the people using AskTable aren't just technical staff - they're business people, managers, and decision-makers.

Their need isn't a smarter AI but more reliable answers. They don't need AI to show off how complex the SQL it writes can be; they need to ask a question and get an answer they can use: correct definitions, permission-compliant, clearly formatted.

This is why we emphasize making every answer trustworthy. That trust doesn't come from the model's confidence; it comes from our strict control of data definitions, careful design of the permission system, and repeated refinement of output formats.

Four Universal Laws: Long-term Advice for AI Application-Layer Products

These four moat layers are AskTable's own answers. But from these two years of practice, we think some universal laws can be extracted that have reference value for any AI application-layer product.

First: embrace rising water, but build the hull.

Actively integrate the latest models - Qwen 3.6-Plus, Claude, GPT-5 - and enjoy the capability dividend from each model iteration. AskTable's plug-and-play model architecture exists precisely for this: upgrading the underlying model doesn't affect upper-layer capabilities.
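The plug-and-play idea can be sketched as a thin adapter layer. The interface and adapter names below are hypothetical, meant only to show that upper-layer logic depends on a narrow protocol rather than any vendor SDK:

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only surface upper layers are allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class QwenAdapter:
    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor API here.
        return f"[qwen] {prompt}"

class ClaudeAdapter:
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"

def answer_question(model: ChatModel, question: str) -> str:
    """Product logic talks to the protocol, never a vendor SDK."""
    return model.complete(question)

print(answer_question(QwenAdapter(), "monthly sales trend?"))
# Swapping the underlying model is a one-line change at the call site:
print(answer_question(ClaudeAdapter(), "monthly sales trend?"))
```

With this shape, a model upgrade touches one adapter class; the semantics layer, permissions, and Skills above it stay untouched.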

But simultaneously, continuously accumulate things models can't replicate. Industry knowledge, data connections, product experience, user trust. These things don't become outdated when you switch models. They're products of time, sources of compound interest.

For how AskTable integrates the latest models, see AskTable First to Support Qwen 3.6-Plus.

Second: do the dirty work, not the PPT.

There are many beautiful AI concepts in the industry, but the key to implementation is handling those unsexy details. Data cleaning and definition alignment. Permission mapping and security verification. SQL verification and error recovery. Performance optimization and stability assurance.

These things don't make good marketing material. But they determine whether the product is usable, pleasant to use, and trusted. AskTable invests heavily in a handwritten SQL parser, an evaluation-set system, and TDD discipline. These investments don't look like highlights on a slide deck, but in production environments they are all moats.

Honestly, sometimes I wonder whether spending this much effort on things others can't see is worth it. But every time I see a customer running AskTable stably in production, I know it is.

Third: from replacing people to augmenting people.

The mainstream narrative from 2024 to 2025 was AI will replace certain job positions. This narrative created massive anxiety but also massive misunderstandings.

2026's reality is clearer. AI doesn't replace data analysts - it lets every business person become a data analyst. It doesn't replace managers making decisions - it gives managers more sufficient data support when making decisions. It doesn't replace entrepreneurs thinking - it lets entrepreneurs spend time on things that truly need human judgment.

We discussed this topic more deeply in our previous article AI Won't Replace Data Analysts.

Fourth: long-termism is the only strategy.

AI market noise will continue. Anxiety marketing won't stop because anxiety is good business. Deification narratives will keep appearing because stories are more eye-catching than facts.

But products and enterprises that truly survive aren't the loudest at shouting slogans - they're the best at continuously delivering value.

From 2024 to 2026, AskTable iterated through multiple product versions and served over 260 customers. But the core direction never changed. Letting business people use natural language to get trustworthy data insights.

This ability to stay the course amid noise is precisely the biggest differentiated advantage in long-term competition.

In Conclusion

Returning to the opening question. Rising with the tide or drowned by the wave?

The answer depends on a choice. Are you building a model's shell or a value carrier?

Model shells become outdated with model upgrades: build a shell on GPT-4 today, and when GPT-5 arrives tomorrow, your shell is obsolete. Value carriers appreciate as time accumulates. Domain knowledge, data infrastructure, product thinking, user understanding - these won't depreciate with the next model release; they only grow more substantial as time settles.

In building AskTable, our choice was clear. Build a value carrier, not a model's shell. Invest heavily in places models can't reach. Business understanding, data governance, product design, user trust. Embrace every model upgrade, but never bet the product's lifeline on any single model.

The tide will keep rising. But a good ship doesn't fear water.

If you're interested in how AskTable helps enterprises with AI data analysis, welcome to visit our website for more information, or schedule an enterprise consultation to discuss with our team.
