Show HN: Open source alternative to Perplexity Comet
browseros.com

Hey HN, we're a YC startup building an open-source, privacy-first alternative to Perplexity Comet.
No invite system, unlike a bunch of others – you can download it today from our website or GitHub: https://github.com/browseros-ai/BrowserOS
--- Why bother building an alternative? We believe browsers will become the new operating systems, where we offload much of our work to AI agents. But these agents will have access to all your sensitive data – emails, docs, on top of your browser history. Open-source, privacy-first alternatives need to exist.
We're not a search or ad company, so no weird incentives. Your data stays on your machine. You can use local LLMs with Ollama. We also support BYOK (bring your own keys), so no $200/month plans.
Another big difference vs Perplexity Comet: our agent runs locally in your browser (not on their server). You can actually watch it click around and do stuff, which is pretty cool! Short demo here: https://bit.ly/browserOS-demo
--- How we built it? We patch Chromium's C++ source code with our changes, so we have the same security as Google Chrome. We also have an auto-updater for security patches and regular updates.
Working with Chromium's 15M lines of C++ has been another fun adventure that I'm writing a blog post on. Cursor/VSCode breaks at this scale, so we're back to using grep to find stuff and make changes. Claude Code works surprisingly well too.
Building the binary takes ~3 hours on our M4 Max MacBook.
--- Next? We're just 2 people with a lot of work ahead (Firefox started with 3 hackers, history rhymes!). But we strongly believe that a privacy-first browser with local LLM support is more important than ever – since agents will have access to so much sensitive data.
Looking forward to any and all comments!
> --- How we built it? We patch Chromium's C++ source code with our changes, so we have the same security as Google Chrome. We also have an auto-updater for security patches and regular updates.
So you rebuild your browser on every Chromium release? Because that's the risk: often changes go into Chromium with very innocent-looking commit messages that are released from embargo 90 days later in their CVE reference.
Good question – so far we have been building on top of the Chromium release that Google Chrome is based on.
This is similar to the Chrome extension nanobrowser: https://github.com/nanobrowser/nanobrowser
I would prefer this as a browser extension, not as its own browser application.
We would've preferred to build this as a browser extension too.
But we strongly believe that building a good agent co-pilot requires a bunch of changes at the Chromium C++ level. For example, Chromium builds an accessibility tree for every website, but doesn't expose it as an API to Chrome extensions. Having access to the accessibility tree would greatly improve agent execution.
We are also building a bunch of changes in C++ for agents to interact with websites -- functions like click, and elements addressed by index. You can inject JS to do this, but it is 20-40x slower.
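To make the element-indexing idea concrete, here's a toy sketch in Python (not BrowserOS's actual C++ implementation; the tag set and data layout are illustrative): walk the markup once, assign each interactive element a stable index, and let the agent refer to elements by that index instead of passing selectors around.

```python
from html.parser import HTMLParser

# Tags an agent would typically want to interact with (illustrative set)
INTERACTIVE_TAGS = {"a", "button", "input", "select", "textarea"}

class InteractiveIndexer(HTMLParser):
    """Collects interactive elements in document order so an agent
    can say 'click element 1' rather than constructing a selector."""

    def __init__(self):
        super().__init__()
        self.elements = []  # list of (index, tag, attrs)

    def handle_starttag(self, tag, attrs):
        if tag in INTERACTIVE_TAGS:
            self.elements.append((len(self.elements), tag, dict(attrs)))

page = """
<form>
  <input name="q" type="text">
  <button type="submit">Search</button>
  <a href="/help">Help</a>
</form>
"""

indexer = InteractiveIndexer()
indexer.feed(page)
for idx, tag, attrs in indexer.elements:
    print(idx, tag, attrs)
```

A native implementation would do this walk inside the renderer and return the indexed list over IPC, which is where the claimed speedup over repeated JS injection would come from.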
Would this be possible for Firefox?
IIRC, Firefox's web extension API does not provide access to the accessibility tree either.
Could you upstream that change in order to make it an extension in the future? I think people would not value it any less.
We don't mind upstreaming. But I don't think Google Chrome/Chromium wants to expose it as an API to Chrome extensions; otherwise they would've done so a long time ago.
From Google's perspective, extensions are meant to be lightweight applications with restricted access.
I'm not really interested in AI agents for my web browser, but it would be pretty cool to see a fork of Chromium available that, aside from being de-googled, relaxes all the "restricted access" to make it more fun to modify and customize the way you guys are. Just a thought – there may be more of a market for the framework than the product :)
See Sciter. A very cool, super lightweight alternative to Electron, but unfortunately it seems like a single developer project and I could never get any of the examples to run.
https://sciter.com/
Yes, we want to do this too! We'll expose much richer APIs.
What use cases do you have in mind? Like scraping?
I mean you could build the agent with a first principles understanding of the DOM instead of just hacking together with the accessibility tree
We had this exact thought as well: you don't need a whole browser to implement the agentic capabilities; you can implement the whole thing with the limited permissions of a browser extension.
There are plenty of zero day exploit patches that Google immediately rolls out and not to mention all the other features that Google doesn't push to Chromium. I wouldn't trust a random open source project for my day-to-day browser.
Check out rtrvr.ai for a working implementation, we are an AI Web Agent browser extension that meets you where your workflows already are.
Is BrowserOS-OS on the roadmap?
(Will you ever make a better FydeOS, or if you're laser-focused, perhaps be open to sharing some with them, so they could?)
Would love to see this show up on homebrew!
Oooh, that's a nice idea! We'll look into doing that!
Making a homebrew recipe is super easy, and you can definitely find an example to draw from that's "shaped" like your app. Highly recommend.
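For reference, a minimal Homebrew cask for a Mac app looks roughly like this (everything here is a placeholder sketch – version, checksum, and the release asset name/URL scheme would need to match BrowserOS's actual GitHub releases):

```ruby
cask "browseros" do
  version "0.1.0"                  # placeholder version
  sha256 "REPLACE_WITH_CHECKSUM"   # placeholder: sha256 of the .dmg

  # Assumed release-asset naming; adjust to the real download URL
  url "https://github.com/browseros-ai/BrowserOS/releases/download/v#{version}/BrowserOS.dmg"
  name "BrowserOS"
  desc "Open-source, privacy-first agentic browser"
  homepage "https://github.com/browseros-ai/BrowserOS"

  app "BrowserOS.app"
end
```

Once a cask like this is accepted into homebrew-cask (or published in a tap), installing is just `brew install --cask browseros`.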
Congrats!
How are you planning to make the project sustainable (from a financial, and dev work/maintenance pov)?
Thank you!
The plan is to sell licenses for an enterprise version of the browser, same as other open-source projects.
my guess is it's just an electron app or chromium wrapper with an ollama wrapper to talk to it (there are plenty of free open source libs to control browsers).
We are a chromium "wrapper"
But we are much more performant than other libs (like Playwright) which are written in JS, as we implement a bunch of changes at the Chromium source code level -- for example, we are currently implementing a way to build the enriched DOM tree required for agent interactions (click, input text, find element) directly at the C++ level.
We also plan to expose those APIs to devs.
No wireless. Less space than a Nomad.
“Just” is a four-letter word :)
When someone in their infinite wisdom decides to refactor an API and deprecate the old one, it creates work for everyone downstream.
Maybe as an industry we can agree to do this every so often to keep the LLMs at bay for as long as possible. We can take a page out of the book of the maintainers of moviepy for shuffling their apis around, it definitely keeps everyone on their toes.
You don’t have to guess, it’s open source
This is very exciting given the rumor that OpenAI will be launching a (presumably not open source) browser of their own this summer. I've joined your Discord, so will try it soon and report back there. Congrats on launching!
Thank you!
Browser wars have begun.
> that OpenAI will be launching a (presumably not open source) browser of their own this summer.
For sure, it won't be open-source. I bet in some parallel world, OpenAI would be a non-profit and actually open-source its AI :)
Do you have any benchmarks to share like Halluminate's Web Bench?
We're working on it! Should have something pretty soon!
What's the roadmap looking like for Linux?
I don't have Mac or Windows.
This is on our radar; we plan to have it ready by early next week!
Still a team of 2 people, so a bunch of things on our plate.
> our agent runs locally in your browser (not on their server)
That's definitely a nice feature. Did you measure the impact on laptop battery life in a typical scenario (assuming there is such a scenario at this early stage)?
The agent running by itself shouldn't impact battery life; it is similar to a lightweight Chrome extension, and if you think about it, it's an agent browsing the web like a human would :)
If you run LLMs locally (using Ollama) and use that in our browser, that would impact battery life for sure.
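For the curious, wiring an app to a local Ollama instance is just an HTTP call to its local REST API (assumes an Ollama daemon running on its default port; the model name below is illustrative):

```python
import json
import urllib.request

# Ollama's default local generation endpoint
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="llama3.1"):
    """Build the JSON body for a one-shot, non-streaming generation call."""
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt, model="llama3.1"):
    """POST the prompt to a locally running Ollama daemon.
    The request never leaves the machine."""
    data = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires Ollama running and the model pulled):
# print(ollama_generate("Summarize this page in one sentence: ..."))
```

Since inference happens on-device, the battery cost comes from the model's compute, not from the browser/agent plumbing itself.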
This looks like a great project.
What are the system requirements? And shouldn't they be listed on your website?
We support Mac (Apple Silicon and Intel) and Windows.
Hardware requirements are minimal – same as Google Chrome – if you bring your own API keys for the agents and are not running LLMs locally.
What is the default BrowserOS model? Is it local, and if so, what inferencing server are you using?
The default model is Gemini.
You can bring your own API keys and change the default to any model you like.
Or better, run a model locally using Ollama and use that!
The default is a remote Gemini call?
So would this or any AI browser go out and fetch a list of the best deals for my trip to Iceland? After "show me all the options," it would show everything it found for flights, hotels, and car rentals with the cheapest/best prices and all details (departure and arrival airports with times), and even let me pay for each item on the same page where I asked. It could also group the overall best deal with details, so I can just click to pay instantly or make some edits.
We just started cooking, very soon you should be able to do this!