“The current climate in AI has so many parallels to 2021 Web3”

  • Something can be both technically incredibly impressive and genuinely useful, and yet at the same time dramatically over-hyped. That may be the case for AI today, although the range of possible outcomes makes the latter point genuinely unclear.

    The difference for me is that Web3 was never shown to be at all useful.

  • This is a lazy tweet.

    ChatGPT is trivially useful in a lot of cases... The other day I asked it to write marketing copy for a project and it wrote _better_ marketing copy than I could have written in an hour. On another project, I spent 10 minutes integrating the OpenAI library and was programmatically receiving incredible results with almost no effort.

    The nature of predicting the future means that there will be periods of overconfidence. In the case of Web3 (taken to mean blockchain/crypto/digital coins), the overconfidence was fueled by a core ponzi scheme combined with truly extraordinary returns for early speculators. But AI has no ponzi scheme attached to it so the comparison breaks down. My uncle cannot gamble his retirement on a mysterious promise of überwealth from a man with wild hair and no financial experience.

    AI companies will have lots of false starts but 2022 was a transformational year and we are only getting started.
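
    For context, a "ten-minute integration" like the one described above might look something like the sketch below. It calls OpenAI's REST completions endpoint directly using only the standard library; the model name, prompt, and helper names are illustrative assumptions, not the commenter's actual code.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt, model="text-davinci-003", max_tokens=200):
    """Assemble the HTTP request for a completion call (no network I/O here)."""
    payload = {"model": model, "prompt": prompt, "max_tokens": max_tokens}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ.get("OPENAI_API_KEY", ""),
        },
    )

def complete(prompt):
    """Send the request and return the first completion's text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["text"]
```

    `build_request` only assembles the payload, so it can be inspected without a network call; `complete()` performs the actual request and needs `OPENAI_API_KEY` set in the environment.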

  • Having lived through hype wave after hype wave after hype wave in tech over the last forty to fifty years, the commentary about the behaviour of VCs resonated. I wonder if part of why VCs back so many copycat companies in hype cycles is a structural incentive to invest in whatever is getting press, raising the VC's profile and attracting more deal flow and limited partners.

    This led to idle speculation: if it were possible to short early-stage startups that VCs were backing, there would be as much incentive for the media to discuss a startup’s shortcomings and vapourware promises as there is to repeat their breathless braggadocio.

  • The difference for me personally: Web3/NFT/Crypto: what? why? DALL-E/ChatGPT: wow!

  • Every day I see one or two new thought pieces on how AI is not actually good/capable/impressive or how AI is overhyped. I have seen zero thought pieces on how AI is amazing and justifiably hyped. Meanwhile, a lot of people are having fun with, or doing useful things with, StableDiffusion and ChatGPT.

    The negative takes are mostly correct in all the limitations they talk about, but what they miss is how amazing these things are despite these limitations. These things are remarkably simple and limited yet they can generate realistic photos of myself in places I've never been, wearing clothes I've never worn, doing things I've never done, with a quick text sentence. Or nearly pass the bar exam.

    On top of that, a lot of the limitations have straightforward ways to address them, many of which are already in progress. It is going to get really interesting. StableDiffusion knows nothing about the images it produces; it's just repeated denoising with image targets. It doesn't really understand anything about your text either; it's just matching up tags. But both of those things can easily change. Put a big language model in front of it to better understand text. Variants of these image models already have depth information. Next up: 3D object information, then maybe models of physics so it can understand how things would actually work in the scene, and so on.

    Going to get wild.
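
    The "repeated denoising" the commenter describes can be caricatured in a few lines. This toy sketch (pure Python, my own construction, not an actual diffusion model) cheats by letting the "noise predictor" see the known target, whereas a real model predicts the noise from learned weights conditioned on text. It only illustrates the shape of the iterative loop.

```python
import random

def toy_denoise(target, steps=50, seed=0):
    """Iteratively refine a noisy sample toward a target, diffusion-style.

    The 'predicted noise' here is computed from the known target, so this
    shows only the loop structure, not real learned denoising.
    """
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in target]   # start from pure noise
    for step in range(steps):
        predicted_noise = [xi - ti for xi, ti in zip(x, target)]
        alpha = 1.0 / (steps - step)            # remove a growing fraction of the noise
        x = [xi - alpha * ni for xi, ni in zip(x, predicted_noise)]
    return x
```

    After the final step the sample collapses onto the target; in a real model the same loop, driven by a learned predictor, produces a novel image instead of reproducing a known one.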

  • Generative AI is good for SEO copy because it’s an AI talking to an AI. Google looks at text generated by ChatGPT and is like “wow, that’s exactly how I think about things.” They were trained on much of the same content: the web.

    The biggest challenge for generative AI is its willingness to make things up. It’s fine when you’re playing around. Not so fine when you’re expecting it to actually help you in a real way.

    I suspect this is why Google has not debuted such an interface despite literally decades of work on AI. You have to be able to bolt a “truth filter” onto the AI, which seems difficult.

  • As far as consumer interest goes, the problem with Web3 is it sells the means rather than the end. Nobody cares if your Twitter/Substack/Spotify Web3 alternative is decentralised or you own your own data on the platform. To win it has to be immediately useful and/or better than alternatives.

    Content generation AI is so obviously useful to the majority of people and it does not require an understanding of how it works in order to be impressed by it.

  • "We tend to overestimate the effect of technology in the short term and underestimate it in the long term."

    The comparison with web3 is very excessive though. This AI stuff is at least somewhat actually useful. Web3 was a gigantic billion dollar bubble that produced very little in the way of things that are useful for any purpose, even playing around. It's one of the most vapid bubbles in history outside pure financial instrument bubbles.

  • The problem is people and their expectations after a sudden technology jump:

    - 15 years ago we got smartphones and people expected laptops and desktop computers to go away (within the next few years)

    - we got web3 and people expected suddenly banks, tradfi, etc. to go away

    - we got ChatGPT 2 months ago and now what? Google dead? School / university irrelevant? Threat to humanity?

    There will be incremental improvements for AI, and it will slowly become more a part of our lives, as has happened with these other technologies.

  • I actually think AI is much more like VR or 3D-cinema tech than Web3: it sounds like it might be a game changer, but nobody actually likes the new medium that much after the initial cool factor. AI generated images are much more impressive though, and some things will be unique.

    In the next decade, I see AI tackling a special category of problems: those that shouldn't be solved, or else the system as a whole gets worse -- chat bots for customer support, absurd amounts of content creation. There's much more content today than 20 years ago, and yet my enjoyment has gone down. If I were a member of Congress, I'd surely be thinking about ways to slow this down.

  • My TikTok stream has been hijacked by pseudo-tech grifters who are promising the usual $1k/day by selling GPT-generated books on Amazon. Go figure.

  • I too have lived through many hype cycles. I jumped on one in 1992 and created one of the first ISPs in Michigan. That changed my career path from automotive to tech. It was an exciting time as thousands of ISPs were launched.

    I completely ignored the hype around blockchain, NFTs, etc. I even block anyone who says they are a blockchain expert on LinkedIn.

    But I am all in on leveraging LLM for new business use cases. My mind is blown at what we can do with DaVinci 3.5 and looking forward to evaluating 4.0. ChatGPT is a demonstrator. DaVinci is ushering in the next wave of innovation.

  • What I find interesting is there has been a (perceived) massive change in popular opinion re "AI". If you got an AI story on HN 6 months ago, there were all kinds of "it's just statistics" comments (also incorrect). Now the mood has changed and all the laypeople are gushing about how great it is.

    Otoh, my impression is people who have been involved in ML for a while didn't have any sea change in their opinion of the technology based on the recent advances. These were predictable, but cool, extensions of things that were already known, and represent a fundamental advance in polish and marketing, rather than technology.

    My point is that the public discourse is now mostly dominated by people looking to profit from hype, not people who actually have experience with the technology, which is of course going to lead to a web3-type feel.

    Incidentally, below is my prediction for 2023 from new year's eve. I didn't think it would start becoming apparent so quick: https://news.ycombinator.com/item?id=34197033

  • Yep. The value/utility of the underlying technology doesn't really matter. The scammers will just switch to using whatever buzzwords are most hip and eventually people will associate the buzzword more with the scammers than the technology itself. This was taken to the extreme with cryptocurrency post-2015 where most people never even had an actual interaction with cryptocurrency but believe they know what it is from all the tangentially related scams they heard ads or news about.

    The utility of AI can be real, just as the utility of bitcoin is, but it'll be drowned under things like "Quantum AI trading platform" or "NFT"s in the public perception.

  • Can AI predict when people will start writing longer form articles on a platform that doesn't artificially restrict you to paragraphs?

  • I still think social authentication and S3 replacements are valid use cases for web3.

    As for GPT, I think it is revolutionary. And it is the first chatbot that I prefer to work with over Google. Way easier to use than Google.

    If it becomes a paid service (assuming the pricing is right), I feel its incentives align better with mine than Google's do.

    Where things get messed up is recommending products and services. If it can stick to a no-pay-to-rank type of service, then it would revolutionize the economy.

    No assistant is perfect, but it seems to do better than a lot of human assistants.

    Will it be perfect in the future? No, but I think if it can offer some kind of confidence factor with its answers, that would go a long way.

  • This is a dangerous assertion. Web3 never had any “there there”: no coherent use case, no value add. Smart contracts? Maybe. No proposed use case actually requires or benefits from blockchain. The currencies were always blatant speculation. A cabdriver in OKC recently told me he was “invested” in diesel coin. A) wtf is that, and B) in what way is that an investment (you know, where you dedicate capital to a value engine, and when value is created you get out more than you put in)?

    AI as it exists TODAY has the potential (with a bit of prompt engineering and a free account) to assist everyone in their jobs. It will disrupt the software industry, art, writing, education, law, science.

    Specialized AI assistants already exist and work startlingly well. It can write at a college level. It can code at a college level. It can learn specialized knowledge worker skills in a trivial amount of time. (Law for example)

    It’s ok to not be optimistic. But if you discount it entirely you’re in for a bad time in the coming year, three years, five years.

    Actually no, do whatever you want. AI will come as a surprise to something like 6 billion people. There’s no harm to me in you being part of that group. And frankly, the existence of different opinions about the future is a great hedge every society makes.

  • I am really looking forward to AI that can help us solve hard biology and engineering issues (say nuclear reactor design, materials science) practically. Or an AI that will help solve coordination problems by telling us exactly where to compromise and how to negotiate so that everyone will agree to say relax zoning regulations. I know there are things like AlphaFold and probably lots of proprietary things in niche industries I'll never hear about because no mainstream media is going to cover that. Are there such examples?

    I find the likes of ChatGPT wonderful, but in the end image and copy generation seems like a very first world/"content creator" use case that won't help us solve critical problems.

  • > “The fact that investment is being driven by pure hype, by data-free narratives rather than actual revenue data or first-principles analysis. The circularity of it all -- hype drives investment which drives hype which drives investment. The influx of influencer engagement bait.”

    I think this is cynical. It's fair for an author to use hype in ads or marketing for clicks; people doing things to make money on the latest trend is just nature.

    But I don't know of anything productive from web3; it was hype all the way through. English is not my first language, and it feels almost like magic to have ChatGPT clean up and rewrite my posts. I have also used it for coding examples. I don't think this is equal to web3. Maybe it needs time to mature to billion-dollar scale, but I wouldn't bet money against it.

  • I find AI products incredibly useful and I save a lot of time by using them; I have paid subscriptions to MJ and OpenAI and I use them frequently.

    On the contrary, web3 is nothing more than empty promises.

  • Crooks and swindlers will find their way into anything where big money is being hyped. Unlike with web3, I actually see products being built from LLMs.

  • The key difference, I think, is that AI, while overhyped, will definitely have at least a partial enduring impact. With web3 etc., that part was less clear.

  • Seems like it has more parallels to the AI Winter of the 1970s. https://builtin.com/artificial-intelligence/ai-winter

    In particular the collapse of faith in the future of Tesla’s full self driving, and self-driving cars generally, has been palpable.

  • What's annoying is that I have not seen one realistic idea come out of this. This is not some kind of alien technology; it is a concrete algorithm. You cannot just say 'this will revolutionize everything' and then refuse to elaborate. It is hard to integrate this into an existing project, and it does not solve the agency issue that we already had, which is that AI can talk, but it cannot make decisions for you. So, more emasculated chat bots that nobody wants to talk to. And how big a market is copy editing, really? To publish something based on ChatGPT you still need to put in a ton of work, because it does not check its sources and makes stuff up. So how is AI really better than 5 years ago, other than being a better writer with a better style? That wasn't the problem 5 years ago, and it is not today.

  • Maybe I slept through it, but what was Web3 in 2021?!

  • People overestimate what's possible short-term and under-estimate what's possible long term.

    Current AI models have three main limitations:

    - rapid skill acquisition. Example: someone invents a new programming language, something like Elm or Rust. Humans can start using it right after reading a "quick start" page and a few tutorials from the authors of the new language. How much training data will GPT-style models need to start using that language? A lot, like the output of hundreds or thousands of people. This needs to go down by a factor of 10x to 100x to match humans.

    - agency, or taking actions guided towards a goal. Example: can you ask a GPT-like model to book a flight ticket for you? To help you learn Photoshop, or a new IDE? To test your latest app or indie game and find bugs? No. The ability of current models is still not good enough to be useful to an average person when it comes to interacting with the outside world.

    - acting in the physical world. Example: an average human can learn to drive well enough to pass a driving test in on the order of 100 hours during a driving course. How far away are we from a system that can control a humanoid-style robot and learn to drive in on the order of 100 hours? Currently, we can't do it even with billions of dollars and specially designed hardware. Using a humanoid robot to control a car is not even useful as a benchmark for state-of-the-art machine learning systems.

    IMO the currently existing systems like ChatGPT or Stable Diffusion are, all put together, worth in the range of $10B to $100B over the next 3-5 years.

    Future systems that address all 3 limitations mentioned above may literally change the whole reachable universe, if we decide to build self-replicating space probes (von Neumann probes). We know that there are physical systems that are not very intelligent but are capable of exponential growth, like viruses or bacteria (humans too).

    The main limitation of biological systems is their limited adaptability, especially to a lack of water. If robots can build other robots, avoiding the bottlenecks of human intellectual and manual labor, then the robots are limited only by the resources and energy available. We have plenty of both on Earth, and the Solar system is full of them.

    Also, I'm just talking about human-like abilities. All of that is possible without involving concepts like superintelligence. Bacteria are not very smart, but they can multiply exponentially.

    The spread of possibilities is enormous.

    All of that hinges on your timelines for the 3 above-mentioned limitations. It's like predicting when the atomic bomb would be possible. It may not happen for decades, or there may be, right this very moment, someone with an idea that will make it all possible.

  • I find it hard to overhype the immense progress we have seen in AI in the last two years. All that hype is deserved.

  • I don't agree with this, like strongly.

    What is web3, actually? I don't think there is a definition people can agree on. So even bashing web3, as if it is anything other than vaporware at best and a scam at worst, is fruitless and not grounded in reality.

    Let's talk about the "current climate in AI" part. From his tweet, he seems to be pointing to Sam Altman's statement that GPT-3.5+ is going to be a 'civilization transformation'. Well, if you follow Altman's past statements, he likes to make grandiose statements like this, and at the end of the day, it is more his personal style of speech than anything else.

    So let's get back to the comparison of web3 vs GPT-style AI agents. I would argue the latter is indeed, partially, already a reality. Imagine you have multiple modular in-context learning agents; then here is the opportunity: a machine that is programmable through natural language and examples/demonstrations. The level of automation potential is going to be insane, even scary. What if we hook it up with some robotic arms? If someone makes this work, we will have a factory that can make many, many things, at the same time, in the same place, with little human involvement. This, of course, is going to fundamentally change a wide range of industries, or capitalism itself, and will have geopolitical implications.

  • Powell needs to keep raising rates

  • and yet AI has actual use cases %)

  • If they mean as an investment or game in the tech stock casino then I'd agree

  • It's more or less a grifting industry at this point. The topic changes as people grow wise to the con, but the tactics stay the same.

  • In this analogy, would OpenAI be the Ethereum-equivalent of AI space - actually innovative tech and some real world utility with a strong team of developers?