July 05, 2023
“Would you rather motivate me with some ice-cream or focus my attention with the aid of a shake?”
This was a well-framed prompting question asked by my 8-year-old daughter.
I don’t recall which we opted for, but the outcome was certainly better than had she just said: “I want ice-cream!”
This situation came to mind—and made me chuckle—as I was playing around with ChatGPT and other generative AI tools.
The art of asking good questions—which has always been an asset—is increasingly becoming a science.
And today, we can interrogate vast sets of data, made accessible to businesses and individuals by tools like ChatGPT.
The ability to engage with generative AI through iterative cycles of ever-improving prompts has proven valuable to me in situations as diverse as vacation planning and coming up with talking points for a conference on hydrogen.
For the latter—a relatively obscure, work-related case—I felt more acutely the limitations of the algorithms’ training data.
As much as I tried, it was impossible to coax out references to recent policy or regulatory support.
There were a few mentions of the CHIPS and Science Act but nothing specific about the Infrastructure Investment and Jobs Act or the Inflation Reduction Act.
My more technically versed colleagues confirmed that this is because the models were trained before such information became available.
I was thus given an interesting opportunity to blend good old-fashioned search and reading with the synthesized outputs from generative AI.
One approach is to look for keywords, read around the subject, and create one's own summary; the other entrusts the task to a tool that returns a succinct and surprisingly easy-to-digest response. Both are now part of our toolkit.
The blind spots imposed on users by either curator—be it a generative AI engine or an AI-assisted search engine—need to be well understood.
There are also concerns about randomization, which is introduced so that the same prompt yields different responses over time (or similar prompts yield different responses at the same time), partly to reduce the impact of bias.
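Much of that randomization comes from how these models pick their next word: rather than always choosing the single most likely option, they sample from a probability distribution shaped by a "temperature" setting. Below is a minimal, self-contained sketch of temperature-based sampling; the logits (raw model scores) are made up for illustration, not taken from any real model.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index from raw model scores (logits).

    Dividing logits by a higher temperature flattens the probability
    distribution, so repeated calls give more varied choices; a
    temperature near zero makes the sampler almost deterministic.
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

rng = random.Random(0)
logits = [2.0, 1.0, 0.5]                     # hypothetical scores for 3 tokens
low  = [sample_with_temperature(logits, 0.1, rng) for _ in range(10)]
high = [sample_with_temperature(logits, 2.0, rng) for _ in range(10)]
```

At low temperature the same "prompt" (the same logits) almost always yields the top-scoring token; at high temperature the draws spread across all three options, which is exactly why two people asking the same question can get different answers.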
This reminds me of my youth, when we would go to the library in search of answers.
The library was considered a sacred source—it held the truth! Or so we believed, until someone pointed out that the librarian's selection of books had unwittingly introduced her biases into our community.
Similarly worded but subtly different prompts can yield vastly different outcomes.
For example, those of us in the commodities trading business will recognize EFP as an acronym for Exchange of Futures for Physicals, EFS as an acronym for Exchange of Futures for Swaps, and EFRP (Exchange for Related Positions), an official acronym used by commodities legislators and regulators.
Ask ChatGPT similarly worded prompts using the various acronyms and you’ll get starkly different results. Try the experiment for yourself to see what I mean.
Another quick experiment is to ask for some sort of recommendation within a budget.
Because the price data used when training the AI model ages rapidly, any price-based recommendations quickly lose their relevance.
The algorithm would require near-constant updating to cope with a dynamic pricing environment impacted by inflation and other macro factors.
This quickly defeats the scaremonger’s argument that AI will soon replace traders and schedulers.
It’s far more important for us to focus on leveraging AI as an assistant than worrying about it as a threat of replacement.
And lastly, AI algorithms have little knowledge of what’s fact and what’s fiction.
They are the latest, greatest embodiment of “garbage in, garbage out.”
Would you want to make a high-stakes trade based on information of questionable provenance?
Far better to have an AI assistant suggest possible interpretations of real-time data you can see on your dashboard, leaving the final decision as to what makes sense in the hands of an experienced human.
Are there viable use cases for generative AI within the world of natural gas scheduling?
We tested it on some short-term logistics use cases, such as reacting to an outage.
A conventional approach would be to consider the known, available options and scramble to find the “least sub-optimal” solution on which to fall back.
Here we feel the burden of our experience; we immediately focus on options with which we are familiar and that can be executed most quickly.
Testing some prompts with the AI engine, we immediately noticed novel routing options.
ChatGPT was assembling ideas that we would not intuitively have considered.
While we realized that implementing such creative alternatives would be unrealistic in response to a short-fuse outage event, it prompted us to consider long-term routing changes that could potentially deliver both economic and carbon savings.
Significant power lies in having these new tools present options that we might not otherwise consider.
The randomization effect mentioned earlier—which proved an irritation and sometimes caused our data science models to fail—suddenly transformed into an asset.
When randomization is used to generate multiple candidate paths, it sometimes produces routings that are both unique and potentially lucrative.
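The idea of random candidate paths can be sketched without any AI at all. Below, repeated random walks over a toy pipeline network surface every feasible routing between two points, including ones a scheduler anchored in habit might not try first. The network, node names, and routes are entirely hypothetical.

```python
import random

# Hypothetical pipeline network: node -> reachable neighbors.
NETWORK = {
    "Hub": ["A", "B"],
    "A": ["C", "Delivery"],
    "B": ["C"],
    "C": ["Delivery"],
    "Delivery": [],
}

def random_route(network, start, end, rng, max_hops=10):
    """Walk the network at random; return the route if it reaches `end`."""
    route = [start]
    node = start
    for _ in range(max_hops):
        if node == end:
            return route
        neighbors = network.get(node, [])
        if not neighbors:
            return None                      # dead end, discard this walk
        node = rng.choice(neighbors)
        route.append(node)
    return None                              # too long, discard

def candidate_routes(network, start, end, attempts=50, seed=0):
    """Collect the distinct routings found by repeated random walks."""
    rng = random.Random(seed)
    found = set()
    for _ in range(attempts):
        route = random_route(network, start, end, rng)
        if route:
            found.add(tuple(route))
    return sorted(found)

routes = candidate_routes(NETWORK, "Hub", "Delivery")
```

The same randomness that makes a forecasting model unreliable is an asset here: each walk is a fresh, unbiased proposal, and collecting many of them enumerates alternatives that deliberate, experience-driven search tends to skip.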
It is easy to imagine exponentially increasing utility as it becomes possible to rapidly retrain AI models with dynamic data.
Similarly, exposing models to proprietary data sets and weighting elements will create private utility rather than generic, everyone-can-do-this functionality.
The skill of asking the right question will compound with our ability—and willingness—to stray beyond the boundaries of our experience.
We must think of the internet as a set of data lakes to which we can add our own data pond, then randomly experience alternative answers to our carefully crafted questions.
This ability to wonder—and wander—is perhaps the greatest gift offered by this generation of GPT-based toolkits.
However, before you attach your name to the output, remember that, with randomness introduced, sometimes, "my ChatGPT is better than your ChatGPT"!
Tags: Digital Transformation