Google has a hell of a moat because they have been continuously crawling and indexing the Web for the last 27 years. They have an enormous amount of data and information that can be used to train their AI.
The result of all that is an AI suggesting you put delicious glue on a pizza. Yummy https://www.theverge.com/2024/6/11/24176490/mm-delicious-glu...
At this point Google has zettabytes of data, and there is no efficient way for them and their algorithms to know exactly what is in it... but I sometimes still wonder how things like this happen.
When it comes to ranking, I think they should weight PageRank more heavily, because I don't see any other algorithm that is simpler or more powerful than that one.
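For anyone who hasn't seen it written down, the whole algorithm fits in a few lines. This is a minimal power-iteration sketch with a made-up toy link graph, not anything like Google's production implementation:

```python
# Minimal PageRank via power iteration -- a sketch with a toy graph,
# not Google's production code.

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Every page gets a baseline share, plus rank flowing in along links.
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical four-page web: everyone links toward "c".
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # prints "c"
```

The elegance is that "importance" falls out of nothing but the link structure and one damping parameter.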
That one happened because Gemini read it on Reddit and hadn't figured out it was a joke. It was only up there from launch until about four days later, when someone removed it.
Yeah, AI still can't figure out jokes, irony, metaphor, etc. Perplexity also quotes Reddit very often; idk how smart that is.
They are getting better at fact checking; Google is apparently working on having it Google the answer - Hassabis at Davos the other day: https://youtu.be/ICv03VysLaE?t=826
It already exists in Gemini, and it existed in Bard... now it's called "Double-check response". Here is a pic of it in action: https://i.imgur.com/aIISzt5.png
It does not let us see the links behind the facts. That's dismaying.
Increasing amounts of data are also becoming hidden behind walled gardens like LinkedIn, Instagram, and Slack/group chats.
LinkedIn at least shows up prominently in Google search results; not sure about IG. And Google still has some major walled gardens of its own, like Docs, Gmail, and YouTube.
Google picks up some LinkedIn content, but I would venture it's a very small slice of what is out there. Doing a Google search for several posts I see on my feed, very few if any show up. Same for IG (which makes sense, as a lot of profiles are private).
I wouldn't be surprised if LinkedIn is eventually excluded from training datasets. Its inclusion trains the LLMs to generate slop, which the providers will eventually be looking to reduce.
I tested 1000 AI tools in one day…
And here are 3 things I learned.
The other thing Google has, and maybe Meta, is good monetization technology. I'm not sure how monetization of AI will work, but I can see that if you ask about flights, say, it'll link to Google's holiday-booking stuff etc.
I remember this coming across as a little salty, but in hindsight I guess it's prescient.
For consumers, I think we could see local-first AI boxes (possibly just our phones) plus a subscription plan that integrates with a set of API providers for info and actions, with paid add-ons and updates. The moat might be integration, or deals with a set of “atomic”/foundational API providers to invoke results (without a [headless] browser).
E.g. booking a flight will go through APIs from airlines partnering with the consumer AI-box companies, with some “best pricing” layer on top so the user can book optimal flights; the user can generate high-quality infinite doomscrolling content on demand, provided by TikTok, with ad and ad-free tiers. Law enforcement will crack down on unregistered “dark” boxes and require API providers to talk only to registered boxes, so dark boxes will have to rely on shady, limited APIs.
They have the data - if only there were no managers trying to impact-farm their stats by using researchers for promo.
This is exactly it. I left Google in 2013 after working there for 3 years because I could see this happening.
I can only imagine how much worse it is now.
Hot take but 99% of all profitable businesses have no moat, and there’s absolutely nothing wrong with that. You can still make lots of $$$.
Moats matter the most to investors and maybe the C suite. For everyone else, it’s just an intellectual exercise
> Hot take but 99% of all profitable businesses have no moat, and there’s absolutely nothing wrong with that. You can still make lots of $$$.
Can you make $157b without a moat? Or, anything close to that? That's the more relevant question at hand.
Not only is there nothing wrong with it, it's a good thing.
It matters to the companies whose business model is to make money because they have a moat.
The VCs thought that scaling laws might build a moat for players with very large computing power, like Google and OpenAI, so they rushed into foundation models.
The emergence of DeepSeek V3 and DeepSeek R1 has made that moat look vulnerable, given their much smaller training cost - which in effect verifies the claim of the Google memo.
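The training-cost argument can be made concrete with the common back-of-envelope rule that training compute is roughly C ≈ 6·N·D FLOPs for N parameters and D tokens. The model sizes below are round, hypothetical figures for illustration, not the labs' actual budgets:

```python
# Back-of-envelope training compute via the common C ~ 6 * N * D heuristic
# (N = parameters, D = training tokens).
# All figures below are hypothetical round numbers, not real lab budgets.

def training_flops(params, tokens):
    return 6 * params * tokens

# A hypothetical dense frontier model vs. a hypothetical efficient model
# with far fewer active parameters (MoE-style), same token count.
frontier = training_flops(1e12, 10e12)   # 1T params, 10T tokens
efficient = training_flops(40e9, 10e12)  # ~40B active params, 10T tokens

print(f"{frontier / efficient:.0f}x compute gap")  # prints "25x compute gap"
```

The point being: if architectural efficiency can cut the multiplier by an order of magnitude or more, raw compute alone stops being much of a moat.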
I think there are going to be no real moats here at all since all it takes is access to chips and money to make this happen.
And expertise, which is where you have to pay for quality, something that makes corporations in the West have an existential crisis.
The expertise comes first from innovation out of universities, and then refinement out of corporations.
Open models aren't going away anytime soon
Not really sure what you’re referring to here. There are multiple AI companies around that routinely pay expert ICs close to a million a year.
That's mostly just from cliques of friends hiring each other back and forth, forever ))<>((. OpenAI tries to win by hiring from Anthropic. Anthropic tries to win by hiring from OpenAI. Meta tries to win by hiring from Google. Google tries to win by hiring from Meta. In the end, the net set of engineers and researchers at these companies doesn't change, but median comp among the in-crowd goes up 50%. It's a great gig if you can get it.
The reason these kind of cliques form is that experts enjoy talking to and working with other experts. I know multiple people who've been able to enter them from outside, they're not closed circles.
That's true enough. The problem is that they quickly turn into echo chambers where "is an expert" means "shares our opinions." Once these communities become established (in academia or industry), the appetite for novelty and innovation drops sharply.
Hugging Face just released their R1 reproduction. This definitely confirms my suspicion that LLMs inherently have very little to no margin.
"it's fine, we'll just use AI to replace engineers"
"all it takes is billions of dollars"
Well ... there's your moat.
Google is the moat
Moat to what? My Google use has dropped significantly now that LLMs are integrated into my IDEs. It’s not zero, but my use has dropped by more than 50%.
My expectations are changing too. Just this weekend I typed into Google
“is Monday a public holiday in Auckland”
And their AI answered me with an enthusiastic and utterly useless paragraph about how some holidays are sometimes on Mondays.
I thought then, Google is DONE
I think they are only surviving on momentum. Because they are so large, they have a lot of it, but in the face of automated thought accelerating innovation, even an enormous amount of momentum can only carry you so far.
> “is Monday a public holiday in Auckland”
People use search engines so differently. It would never occur to me to type anything like this into Google. I would type "New Zealand bank holidays" and expect to find a list (ideally on an official-looking website), and then judge for myself whether Monday was a holiday.
Google finds me this as the first result: https://www.govt.nz/browse/work/public-holidays-and-work/pub...
What would be the point of introducing any AI into this interaction?
I used to type "New Zealand Bank Holidays", but now my expectations have changed - my expectations of what a search engine can be have evolved, because I have tools in other areas of my life that are smarter and getting better, and I expect Google to get smarter and be better too. I don't want to query Google the same way I did in 2015, because it's 2025.
The answer I want is NOT a list of public holidays; that's an intermediate step. The answer I want is a single word: "Yes" or "No". That's what I want.
I can ask an LLM to refactor my codebase and then help me write a GLSL shader to display electron density grids, and it gets it mostly right. Getting such a complex query mostly right is WAY harder than answering my /basic/ question about public holidays.
Asking "Is Monday a public holiday in Auckland" is a BASIC query I expect Google to be able to answer now.
I'm an engineer and have been a dev for 20 years. I know that the holiday query involves a realtime lookup of the current date and time, and possibly another query to get the holidays, so it might not be as straightforward as a forward pass through a deep net - but this is GOOGLE we are talking about. What the HELL have they been doing for the last 5 years?
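The lookup being described really is small. Here's a sketch: resolve "Monday" relative to today, then check it against a holiday table. The table below is hand-rolled for illustration (the two 2025 dates are real, but a real service would query an authoritative source rather than a hardcoded dict):

```python
# Sketch of the two-step lookup: resolve "Monday" from the current date,
# then check it against a holiday table. The table is hand-rolled for
# illustration; a real service would hit an authoritative data source.
from datetime import date, timedelta

HOLIDAYS_AUCKLAND = {
    date(2025, 1, 27): "Auckland Anniversary Day",
    date(2025, 2, 6): "Waitangi Day",
}

def upcoming_monday(today):
    # Monday is weekday 0; (0 - weekday) % 7 gives days until Monday
    # (0 if today already is Monday).
    return today + timedelta(days=(0 - today.weekday()) % 7)

def answer(today, holidays):
    monday = upcoming_monday(today)
    name = holidays.get(monday)
    if name:
        return f"Yes, {monday} is a public holiday: {name}."
    return f"No, {monday} is not a public holiday."

# Asked on Saturday 2025-01-25, the upcoming Monday is Anniversary Day.
print(answer(date(2025, 1, 25), HOLIDAYS_AUCKLAND))
```

Which is the point: the hard part is the data plumbing and the date resolution, not the language model.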
The thing that really irked me about the response was that it was SO worthless. It instantly gave me a very bad impression of Google, because I thought "this thing is dumb as hell". No AI response and just a list of holidays would have been better.
>People use search engines so differently. It would never occur to me to type anything like this into Google. I would type "New Zealand bank holidays" and expect to find a list (ideally on an official-looking website), and then judge for myself whether Monday was a holiday.
I used to always use searches like that, but honestly you mostly get better results now using human-language searches, because they've optimized the site to work better for those. I've had to untrain years of search-engine habits and force myself to consider how a random joe would interact with it.
Talk to it exactly like a person with your voice and no crafted search term googlese and get a correct expert answer?
Ker-ching.
>And their AI answered me and gave me an enthusiastic and utterly useless paragraph about how some holidays are sometimes on mondays.
The AI summaries are almost universally wrong or incomplete in an important way for everything I've searched for lately. It's honestly devaluing the site in ways that might be hard to recover from.
I think AI capabilities perception in general is being greatly damaged by the Google search AI summary. Whatever model they use is so cheap and crappy, yet I can't opt out of it or even get my eyes to skip the box... Claude or Perplexity or whatever can comfortably and concisely answer questions about Auckland holidays without hallucinating, yet the Google search AI thinks you can eat rocks and put glue on pizza, and I see people trot similar examples out all the time to prove that "AI is dumb".
I just typed “is Monday a public holiday in Auckland” into Google and the second link is a full list of holidays from www.govt.nz. I don't think Google is done. I still use it, and it has ~79% share on desktop. Most of the rest seem to be people who have Bing as the default on Windows and haven't changed it. 0.87% use DuckDuckGo, which is probably the highest on the list that people choose as a better search engine rather than because it came installed as the default.
I'm kind of surprised Google has dominated so long but I guess it's lots of money -> hire lots of bright people -> make innovations, that keeps them up there. That may work for AI too? Dunno.
I clicked the Copilot button in Bing with this search term and it gave the exact answer I was looking for, in plain English:
"Yes, *Monday, January 27th* is a public holiday in Auckland. It's the *Auckland Anniversary Day*, which is celebrated on the Monday nearest to January 29th each year.
Do you have any special plans for the holiday?"
It cited 2 sources which turned out to be correct (this time).
Seems far more efficient than googling which nowadays gives you an entire first page of ads. Although I'm not sure to what extent hallucination issues have been sorted out.
Well, Google was two clicks and I have an ad blocker. I guess it depends what you are after - I use AI too.
MySpace was the Moat.
Yahoo was the Moat.
Frankly I hope this pops the AI bubble back to reality. We don't really need a 500B investment in AI. We do need a 500B investment in cleaning up our emissions. Something AI would only make worse.
It didn’t pop it two years ago when it was published/leaked. In fact since then the bubble has only swelled faster.
DeepSeek has recently provided a more concrete demonstration of what this paper laid out in principle, which could impact things.
But nobody is going to make monopoly profits from reducing emissions. It just costs money. Things that benefit everyone don't get nearly as much effort as things that benefit a single billionaire.
Right, that would require social endeavors to solve social problems instead of targeting next quarter's yacht money.
Why is this being downvoted?
Fighting global warming and investing in clean tech is more important than funding ClosedAI and friends and making the hypesters and enshittifiers even richer than they already are.