Jul 14 / Benjamin Schumann

The future of simulation tools in the age of LLMs

100% AI-free writing & thinking*

* I enjoy both too much

tl;dr

  • “toy” simulation tools will die
  • “deep” and “wide” simulation tools stay relevant but will see some hard years and need to adapt fast (see the end for tips)
  • LLM hype will hit a “trough of disillusionment”, after which a new equilibrium between LLM-based toy models and advanced applications will be found.
  • The exact tipping-point between them is hard to predict but I have a personal guess (at the end)

True disruption

After decades of relatively stable (=tame? =linear) advancements in simulation tools, we are now seeing a proper disruption. Just go to any LLM tool and ask it to build a simulation model for you. It will comply. And it will not be too bad at it. Quite mind-blowing.

This will change the market. But how? Here are my thoughts on the dynamics at play and what will come of it.

The current simulation world

I see three types of simulation tools out there, and they will be impacted differently:

“Toy” tools

These are tools aimed at beginners or “lay” people who want to build simulation models without too much effort. They eternally promised “you can build models without any coding” (and implicitly “without any knowledge”). They shine visually and in simplicity, but users soon hit the limits of the tool. 
There are many commercial and open-source tools in this category. 
There are many commercial and open-source tools in this category. 

NOTE: This term is not meant to be derogatory; these tools play a crucial role!

“Deep” tools

These are highly advanced, specialised simulation tools for very detailed applications. By default, they work at extreme precision and are used for very focused purposes (CAD, CFD, and to some extent some supply-chain and manufacturing tools).
There are (typically extremely expensive) commercial tools and a few open-source ones.

“Wide” tools

These are the Swiss army knives of simulations. They basically let you do anything (including “toy” and “deep” applications) and have a steep learning curve.
There are not many of these and all are commercial. Open-source “wide” tools do not exist. To model “wide” with open-source, you essentially employ several open-source tools in parallel.

LLM advances

LLMs are currently still advancing in many dimensions. In terms of coding (which is what you need for ANY simulation model), the edge of capabilities is not yet in sight, in my view. You can already create interactive simulation models of simple systems with a few prompts.
IMPORTANT: The main disruption for simulation tools is not that the LLMs can build models. It is that the eternal promise of “toy” tools finally comes true: Anyone can now truly build simple simulation models with no knowledge or coding necessary.
This was just not true before, despite a lot of marketing claiming it 😜 
So we are now actually enabling a much wider group of people to enter the formerly exclusive club of “simulation modellers”. 
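
To make this tangible, here is the kind of model an LLM will happily generate from a one-line prompt such as “simulate a single-server queue”. It is a minimal sketch in plain Python with purely illustrative parameters, not the output of any specific tool:

import random

# Minimal single-server queue: the kind of "toy" model an LLM produces
# from a one-line prompt. All parameters are purely illustrative.
random.seed(42)

ARRIVAL_RATE = 1.0    # customers per minute (Poisson arrivals)
SERVICE_RATE = 1.2    # customers per minute (exponential service)
N_CUSTOMERS = 10_000

arrival_time = 0.0
server_free_at = 0.0
total_wait = 0.0

for _ in range(N_CUSTOMERS):
    arrival_time += random.expovariate(ARRIVAL_RATE)      # next arrival
    start_service = max(arrival_time, server_free_at)     # queue if server is busy
    total_wait += start_service - arrival_time
    server_free_at = start_service + random.expovariate(SERVICE_RATE)

print(f"Average wait in queue: {total_wait / N_CUSTOMERS:.2f} minutes")

Useful, genuinely impressive, and exactly the territory that “toy” tools used to own.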

LLM limits

However, every technology follows an S-curve, and LLMs will be no different (despite some claiming exactly that). 

LLM technology

LLMs are inherently stochastic. You will get a different model each time you prompt for one, even if the prompts are identical. While some make this out to be a deal-breaker, don’t forget that this was the status quo until today anyway: ask 5 simulation modellers to model the same system and you get 5 wildly different models. So I don’t see this as a big issue.

LLM prices

This is the far bigger limit. We are currently in the “luring-in” phase of LLM tools. You get a lot more for free than you should, to gain market share. This, too, will change. Once the market is saturated, prices will reflect the true cost of your queries. And currently, that cost is MUCH higher than what you are asked to pay. 

Despite expected gains in model efficiency, those gains will be eaten up by offering better capabilities. Prices will go up.

The tug-of-war

This LLM disruption brings two related developments fighting out a new equilibrium:

Toys vs tools

On the one side, LLMs actually fulfil (for the first time) the promise of “toy” tools for enabling anyone to build simulation models without knowledge or coding.

On the other side, you will not be able to truly leverage what you built without simulation knowledge or coding. 

The question is: How far will “toy” users take LLM modelling before the limits are reached? When does it get too tedious to refine via (ever more detailed) prompts?

Prompts vs code

To build truly advanced (“production-ready”, quantitative, trustworthy…) models, prompts will need to be more and more precise. You cannot (ever!) create a well-defined model from inherently imprecise language prompts.
At some point, the prompts will have to be so specific, well-defined and exact that they become their own programming language. We will have come full circle.
The question is: when will users reach that inflection point of diminishing returns and pick up a tailored (“deep” or “wide”) simulation tool again?
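
To illustrate what I mean (my own made-up example, not any tool’s real syntax): once every distribution, resource and routing rule has to be spelled out unambiguously, the “prompt” is indistinguishable from a model definition in code.

from dataclasses import dataclass

# A "fully specified prompt" ends up looking like a model definition in code.
# Everything here is hypothetical and illustrative, not a real tool's API.

@dataclass
class Resource:
    name: str
    capacity: int

@dataclass
class ProcessStep:
    resource: str
    duration: str   # e.g. "triangular(4, 6, 9) minutes"

model_spec = {
    "arrivals": "exponential(mean=5 minutes), weekdays 08:00-18:00 only",
    "resources": [Resource("dock", 2), Resource("forklift", 3)],
    "routing": [
        ProcessStep("dock", "triangular(4, 6, 9) minutes"),
        ProcessStep("forklift", "uniform(2, 4) minutes"),
    ],
    "kpis": ["dock utilisation", "average time in system"],
}

At that level of precision you are, for all practical purposes, programming again, just without the tooling, debugging and validation support that proper simulation environments provide.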

The overall question

Combining these, I see one overall question: where exactly will the tipping-point be between:
  • beginners/lay people applying LLMs directly for “toy” models, and
  • using “deep” or “wide” simulation tools with their “highly advanced prompting” (aka a programming language)?
My current personal prediction is this: 

Short-term

Users will try to use LLMs as much as possible, squeezing out every ounce of capability.

Long-term

Users will push the balance back towards "deep" and "wide" tools due to:
  • The trough of disillusionment
  • Rising costs
  • Explainability: if you lack knowledge of simulations and/or coding, you can only blindly trust the LLM. This does not carry you very far in real-world applications.
  • Accountability: boring, but many clients will not just trust your LLM-based model. This may improve long-term.
  • Adjustability: This already plays out in the world of programming: LLMs are great for prototypes and “toys” but they often steer you into a corner. Adding or adjusting features becomes exponentially more cumbersome

The overall result

Based on this, I think we will see that:

“toy” simulation tools will die

Commercial tools cannot justify their pricing, and open-source tools lose their user base.

“deep” and “wide” simulation tools stay relevant but:

They will see some hard years and need to adapt fast. How?
Offer native LLM support!
For example:
  • a “co-pilot” in the tool that monitors the model you build, offering explanations and tips
  • a “builder” that actually creates basic model constructs itself to save you time
  • a “tester” that automatically designs unit and integration tests and runs them natively (see the sketch below)
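
As a rough sketch of the “tester” idea, in Python and with a hypothetical run_model() placeholder standing in for whatever the tool actually executes:

# Hypothetical sanity checks a "tester" assistant could generate and run
# natively against a model. run_model() and its result fields are
# placeholders, not the API of any existing simulation tool.

def run_model(seed: int) -> dict:
    # Placeholder: a real tool would execute the user's model here.
    return {"entities_in": 1000, "entities_out": 1000, "utilisation": 0.83}

def test_conservation_of_entities():
    result = run_model(seed=1)
    assert result["entities_in"] == result["entities_out"], \
        "every entity created must eventually leave the model"

def test_utilisation_is_plausible():
    result = run_model(seed=1)
    assert 0.0 <= result["utilisation"] <= 1.0

def test_fixed_seed_is_reproducible():
    assert run_model(seed=7) == run_model(seed=7)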

LLM hype will hit a “trough of disillusionment”

After that, a new equilibrium between toy models and advanced applications will settle in. Price hikes and explainability will push the tipping-point towards “deep” and “wide” models.