4 min read

Plenty of questions

The emergence of ChatGPT has raised plenty of questions; is ChatGPT a Jurassic Park moment; persistent inflation equals increased industrial action; and the AI alignment problem.
Plenty of questions
Photo by Huang Yingone / Unsplash

1—Plenty of questions

ChatGPT burst onto the digital scene to great fanfare a couple of weeks ago, with over two million people signing up. Tech folks are excited, despite the AI powering it being identical to what they were using to "make images a few weeks ago". But according to Benedict Evans, ChatGPT still raises plenty of questions:

"How does this generalise? What kinds of things might turn into a generative ML [machine learning] problem? What does it mean for search (and why didn't Google ship this)? Can it write code? Copy? Journalism? Analysis? And yet, conversely, it's very easy to break it - to get it to say stuff that's clearly wrong. The wave of enthusiasm around chat bots largely fizzled out as people realised their limitations, with Amazon slashing the Alexa team last month. What can we think about this, and what's it doing?"

Evans points out that despite its impressive question-answering abilities, ChatGPT suffers from the "inherent limitation that such systems have no structural understanding of the question - they don't necessarily have any concept of eyes or legs, let alone 'cats'."

"I think this is why, when I ask ChatGPT to 'write a bio of Benedict Evans', it says I work at Andreessen Horowitz (I left), worked at Bain (no), founded a company (no), and have written some books (no). Lots of people have posted similar examples of 'false facts' asserted by ChatGPT. It often looks like an undergraduate confidently answering a question for which it didn't attend any lectures. It looks like a confident bullshitter, that can write very convincing nonsense. OpenAI calls this 'hallucinating'."

You can read the full post by Benedict Evans here (~8 minute read), in which he notes that with this type of AI (machine learning) "there are always humans in the loop", it's just not entirely clear what role they will play.


2—AI's Jurassic Park moment

AI is becoming good at producing "text and images that look remarkably human-like, with astonishingly little effort". Really good. And that "is, or should be, terrifying":

"The core of that threat comes from the combination of three facts:

• these systems are inherently unreliable, frequently making errors of both reasoning and fact, and prone to hallucination; ask them to explain why crushed porcelain is good in breast milk, and they may tell you that 'porcelain can help to balance the nutritional content of the milk, providing the infant with the nutrients they need to help grow and develop'. (Because the systems are random, highly sensitive to context, and periodically updated, any given experiment may yield different results on different occasions.)

• they can easily be automated to generate misinformation at unprecedented scale.

• they cost almost nothing to operate, and so they are on a path to reducing the cost of generating disinformation to zero. Russian troll farms spent more than a million dollars a month in the 2016 election; nowadays you can get your own custom-trained large language model, for keeps, for less than $500,000. Soon the price will drop further."

That's from AI scientist Gary Marcus, who describes ChatGPT's emergence as AI's Jurassic Park moment, because the possibility of "unintended and unanticipated consequences" raises a point similar to the one brought up by fictional scientist Ian Malcom's (Jeff Goldblum):

"Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."

Do read the full essay from Marcus here (~5 minute read).


3—The winter of discontent

Unanticipated inflation only works at reducing debt and real wages in the short-run. Workers aren't stupid and eventually notice their wages being fleeced, and they'll respond on multiple margins – including strikes.

4—The AI alignment problem

How do we mere humans ensure that AI does what we want it to do, and doesn't do something really bad like, say, wipe out humanity? That's a question known as the 'AI alignment problem', which gained in prominence "following the 2014 bestselling book Superintelligence by the philosopher Nick Bostrom". Melanie Mitchell provided a helpful summary:

"Computers frequently misconstrue what we want them to do, with unexpected and often amusing results. One machine learning researcher, for example, while investigating an image classification program's suspiciously good results, discovered that it was basing classifications not on the image itself, but on how long it took to access the image file — the images from different classes were stored in databases with slightly different access times. Another enterprising programmer wanted his Roomba vacuum cleaner to stop bumping into furniture, so he connected the Roomba to a neural network that rewarded speed but punished the Roomba when the front bumper collided with something. The machine accommodated these objectives by always driving backward.

But the community of AI alignment researchers sees a darker side to these anecdotes. In fact, they believe that the machines' inability to discern what we really want them to do is an existential risk. To solve this problem, they believe, we must find ways to align AI systems with human preferences, goals and values."

Mitchell thinks solving the conundrum won't be easy, in part because "we cannot even define the problem". For example, is it even possible for a machine to achieve "superintelligence without having any of its own goals or values"? For AI built using machine learning such as ChatGPT – i.e. one that requires human input to do anything – the solution might be as simple as thinking about it, at least in part, as a cybersecurity problem.

You can read Mitchell's full essay here (~6 minute read).


5—Further reading...

📊 "9 charts that show the [US] economy is kind of a mess right now."

🐀 Rats fleeing a sinking ship: "One of Sam Bankman-Fried's close associates told Bahamian regulators in the days before FTX collapsed that the now-disgraced founder had likely funnelled customer money to his hedge fund, a move that helped accelerate the 30-year-old's downfall."

📣 "If Elon rules Twitter as essentially a conservative-leaning moderator, that's much less bad for society than what Twitter's old management did, which was to stoke maximum ideological combat to boost engagement."

📉 "[I]t is not hard to imagine that in a few months we will be able to conclude that the fight against inflation is indeed over. So it also makes sense for the Fed to scale back the size of the increase in interest rates as it gets ready to declare victory."

💉 Different risk tolerance: Unvaccinated individuals had "a 72% increased relative risk [of getting into a car crash] compared with those vaccinated".

🔮 "Academics checked 25 years of Australian economist forecasts for accuracy. Treasury did badly."