Jarosław Jamka

Past and Future of AI

This is a short history of AI (see graphic by Goldman). H/T to James Wong, who posted it on LinkedIn.



The biggest milestone was the public debut of ChatGPT (running GPT-3.5) in November 2022, followed only four months later, in March 2023, by the much more capable GPT-4.

An even better version, GPT-4o, debuted in May 2024.


But what's next?


From the hardware side (computing infrastructure in general), Nvidia will not disappoint... the newest and fastest Blackwell chip is already in production, and the company is already working on even faster ones: Blackwell Ultra (available in 2025), Rubin (2026) and Rubin Ultra (2027).


From the "software" side... we have more and better versions of LLM models ahead of us.


Leopold Aschenbrenner, an ex-member of OpenAI's Superalignment team and now founder of an investment firm focused on artificial general intelligence (AGI), has just posted a massive 165-page essay on AI. He argues that if the straight-line trend of AI improvements simply continues, we will reach AGI relatively soon.


If true... the current boom in AI shares is not even a "baby bubble"; we are at the very beginning...


Take a look at Leopold's chart (attached below), which shows where we may be by 2028 ("Base Scaleup of Effective Compute").


Leopold Aschenbrenner:

"I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph".



Leopold Aschenbrenner:

"GPT-4’s capabilities came as a shock to many: an AI system that could write code and essays, could reason through difficult math problems, and ace college exams. A few years ago, most thought these were impenetrable walls.


But GPT-4 was merely the continuation of a decade of breakneck progress in deep learning. A decade earlier, models could barely identify simple images of cats and dogs; four years earlier, GPT-2 could barely string together semi-plausible sentences. Now we are rapidly saturating all the benchmarks we can come up with. And yet this dramatic progress has merely been the result of consistent trends in scaling up deep learning".


And what is the biggest obstacle? The lack of further data to train LLMs on... even all the data available on the Internet is not enough...
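A rough sketch of that arithmetic, assuming the Chinchilla scaling heuristic (Hoffmann et al., 2022) of roughly 20 training tokens per model parameter, and an assumed stock of usable high-quality web text of about 30 trillion tokens (the true figure is uncertain):

```python
# Rough arithmetic behind the "data wall". Both constants are assumptions:
# ~20 tokens/parameter is the Chinchilla rule of thumb, and the usable
# web text stock is taken to be ~30T tokens purely for illustration.
TOKENS_PER_PARAM = 20
WEB_TEXT_TOKENS = 30e12

for params in (70e9, 1e12, 10e12):   # 70B, 1T, 10T parameter models
    needed = TOKENS_PER_PARAM * params
    print(f"{params / 1e9:,.0f}B params -> ~{needed / 1e12:,.1f}T tokens "
          f"({needed / WEB_TEXT_TOKENS:.2f}x the assumed web stock)")
```

On these assumptions, a compute-optimal 10-trillion-parameter model would already need several times more text than the entire assumed stock of the Internet.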
