ChatGPT Code Interpreter: A new era of decision making

A year in the future, I receive a notification as I open my laptop to start the working day: sales were down vs. budget overnight. 

I open ChatGPT Code Interpreter and simply type “why were sales down overnight?”.

Less than two minutes later, my virtual analyst has sent me three hypotheses, each with supporting data and rationale.


  1. Competitors reduced their prices yesterday, particularly for products competing with SKU152, resulting in lower sales in that category. This contributed roughly 25% of the variation.
  2. Meta ads weren’t as impactful as we expected because one of our creative assets performed poorly: the product it was hero-ing had reduced demand. This contributed roughly 10% of the variation.
  3. Overall demand within the category was down because of the unusually hot weather. This accounts for the vast majority of the variation. 
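
As a back-of-envelope illustration of the attribution above, here is a hypothetical sketch in Python. The function name, the 20,000 shortfall, and the 60% weather share are my own assumptions for illustration; only the 25% and 10% figures come from the scenario itself.

```python
# Hypothetical sketch: how a "virtual analyst" might split an overnight
# sales shortfall across candidate drivers. Not a real ChatGPT output.

def attribute_shortfall(total_gap, contributions):
    """Split a sales gap (vs. budget) across hypothesised drivers.

    contributions maps driver name -> share of the gap (0..1);
    any remainder is labelled 'unexplained'.
    """
    breakdown = {driver: total_gap * share for driver, share in contributions.items()}
    breakdown["unexplained"] = total_gap - sum(breakdown.values())
    return breakdown

# Overnight sales were 20,000 below budget (invented figure).
gap = 20_000
shares = {
    "competitor pricing (SKU152 category)": 0.25,  # hypothesis 1
    "underperforming Meta creative": 0.10,         # hypothesis 2
    "hot weather suppressing demand": 0.60,        # hypothesis 3 ("vast majority"; assumed share)
}
breakdown = attribute_shortfall(gap, shares)
```

The interesting part of the workflow is not this arithmetic, of course, but generating and evidencing the hypotheses that feed it.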


Interesting. I write back to my virtual analyst: “Based on historical data, what is the best way of reacting to the above challenges?” 


  • The weather will be optimal on Friday – increase spend across Meta and YouTube by 12% from Thursday evening through to Friday night.
  • Stick to high prices for SKU152 in order to prevent margin erosion, as this is optimal for long-term sales.
  • Think about creating more brand-level creative as current ratios are suboptimal based on historic meta-analysis of campaign data. 


“Thanks ChatGPT”, I write. 

“Please draft and send the required prompts to alter the above, and message the relevant personnel across the organisation based on the org structure I shared previously.

“Despite your guidance, please reduce prices for SKU152 this weekend, as I believe our competitors will stick prices back up.

“Please also adjust our budgets accordingly and ensure the exec dashboard reflects the change.”



You get the picture. 

You probably also recognise that this isn’t a new promise. You have seen tons of demos promising similar outcomes over the past 10 years, from consultancies to niche SaaS providers to agencies to large tech companies. 

So you are probably very sceptical that the future above will ever become a reality for a myriad of reasons:


“This is a pipe dream; getting and structuring all that data is the real issue, especially anything close to real-time!”

“The past doesn’t really represent the future, you can’t just automatically make decisions based off data!”

“How do you hold the machines accountable? How can you tell if they are spouting rubbish?”


And so on.

All of these are, of course, challenges, but each can be overcome with a little thought.


We are genuinely entering a new era where the promise of Clarity At Speed Decisioning (CASD) is becoming a reality. 


Notice: I’m not calling this data-driven decisioning, which gives historic data a larger role than I’m comfortable with, but clarity at speed. Clarity – driven by access to the right information at the right time, but also by exceptional EQ and a read of the landscape. At speed – because we finally have the tools at our disposal to mine data at the speed required to inform decisions (learning next week why sales dropped overnight is effectively useless).

What’s different this time? There are lots of small reasons and one big one: the power of free. I don’t mean free cost, but free effort. 

In general, taking effort from low to zero drastically increases adoption (Ariely and others have shown this repeatedly with pricing experiments). Natural language processing means anyone (and I mean anyone) can put a question to Code Interpreter in plain English. Even drag-and-drop tools (like Alteryx) require an intermediary step between desire and instruction; removing that step takes the barrier to entry from low to negligible.
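
To make that intermediary step concrete, here is a hypothetical sketch of the kind of hand-written query a plain-English question replaces – all SKU names and figures are invented for illustration:

```python
# "Why were sales down overnight?" expressed the old way: code an analyst
# would previously have had to write (in Python, SQL, or a drag-and-drop
# flow) before getting an answer. Data values are made up.

actuals = {"SKU152": 7_000, "SKU201": 5_500, "SKU305": 4_000}
budget  = {"SKU152": 9_000, "SKU201": 5_000, "SKU305": 4_500}

# Variance vs. budget per SKU, worst performers first.
variance = {sku: actuals[sku] - budget[sku] for sku in actuals}
worst_first = sorted(variance.items(), key=lambda kv: kv[1])
```

The point is not that this code is hard, but that having to write it at all is the barrier a natural-language interface removes.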

So adoption will be absolutely rife in a way that adoption of existing tools simply hasn’t been. Not only that, but we have access to more data now than ever before. Ten years ago, I was hard-pressed to find basic census data in a usable format. The data landscape is far more beautiful now.


So what impact will all of this have? 

Well, put it this way – I have already amended my hiring plan and job descriptions for the next six months.

Because the skills we need will shift drastically. 

  • More critical thinkers, fewer data churners: we can focus more on what information we need and why, not how we get it
  • More mathematicians, fewer coders: we can focus on hiring people who have taken the time to understand Bayesian probability, rather than people who merely know how to code specific snippets in Python
  • More generalists, fewer siloed departments: anybody can type in their mother tongue; not everyone can identify when data needs to be transposed in a certain way. The centralised team will become the “maths team”, not the “data team”.
  • More strategists, fewer short-term tacticians: optimising for the short-term is a job for AI, not for people.


At a wider operational level it will change processes, people, technology and data architecture. So, quite a lot! 

At a wider societal level it should change what people study, how long they study for, how many days a week people work, where people work from. But that’s an article for another day…

For now, my message is hopefully loud and clear. Adopt, adapt, and rid yourself of any complacency.