codedokode 8 minutes ago

I don't like this. This could be implemented as a JS library. I believe browsers should provide a minimal API so that they are smaller and easier to create. As for a safe alternative to innerHTML, it's called innerText.
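
To illustrate (the element id and string are made up): with innerText the untrusted string stays text instead of becoming markup.

  const el = document.getElementById('comment');
  const untrusted = '<img src=x onerror=alert(1)>';
  el.innerHTML = untrusted; // parsed as markup, so the onerror handler can fire
  el.innerText = untrusted; // rendered as literal text, nothing executes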

evilpie 5 hours ago

We enabled this by default in Firefox Nightly (only) this week.

  • spankalee 4 hours ago

    I'll be very excited to use this in Lit when it hits baseline.

    While lit-html templates are already XSS-hardened because template strings aren't forgeable, we do have utilities like `unsafeHTML()` that let you treat untrusted strings as HTML, which are currently... unsafe.

    With `Element.setHTML()` we can make a `safeHTML()` directive and let the developer specify sanitizer options too.
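
    Roughly the idea, as a sketch rather than real directive code (the helper name is made up, and the sanitizer option shape follows the current proposal, so it may still change):

      // Hypothetical helper, not actual Lit code: a safeHTML() directive
      // would ultimately do something like this to its target element.
      function applySafeHTML(element, untrustedMarkup, sanitizerConfig) {
        // setHTML() parses and sanitizes in one step; XSS-unsafe elements
        // and attributes are dropped even if the config tries to allow them.
        element.setHTML(untrustedMarkup, { sanitizer: sanitizerConfig });
      }
      // e.g. applySafeHTML(el, userBio, { elements: ['p', 'a', 'em', 'strong'] });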

    • StrauXX 4 hours ago

      Why don't you use DOMPurify right now? It's battle tested and supports configs just like this proposal.

      • spankalee 2 hours ago

        One, lit-html doesn't have any dependencies.

        Two, even if we did, DOMPurify is ~2.7x bigger than lit-html core (3.1Kb minzipped), and the unsafeHTML() directive is less than 400 bytes minzipped. It's just really big to take on a sanitizer, and which one to use is an opinion we'd have to have. And lit-html is extensible and people can already write their own safeHTML() directive that uses DOMPurify.

        For us it's a lot simpler to have safe templates, an unsafe directive, and not parse things too finely in between.

        A built-in API is different for us though. It's standard, stable, and should eventually be well known by all web developers. We can integrate it with no extra dependencies or code, and just adopt the standard platform options.

      • ffsm8 3 hours ago

        Why would the framework do that?

        The app developers can still use that right now, but if the framework forces its usage it'd unnecessarily increase package size for people that don't need it.

CaptainOfCoit 5 hours ago

Really happy to see it, after 25 years (https://www.bugcrowd.com/glossary/cross-site-scripting-xss/) of surviving without it. It always struck me as an obvious missing part of the DOM API, and I still don't know why it took this long.

But mostly I'm just happy that it's finally here, and I do appreciate all the hard work people have been doing to get this live.

padjo 4 hours ago

As someone who has dealt with more than my fair share of content injection vulnerabilities over the years, this is great to see at last. It's kinda crazy that this is only coming now while other, more cumbersome solutions like CSP have been around for years.

sergeykish an hour ago

So `.setHTML("<script>...</script>")` does not set HTML?

  • xp84 21 minutes ago

    Sounds reasonable enough to me. 99.99% of the time you're in an actual script, if you mean to execute code, you'd just execute it yourself, rather than making a script tag full of code and sticking that tag into a random DOM element. That's why the default wouldn't honor the script tag and there'd be an "unsafe" method explicitly named as such to hint that you're doing something weird.
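
    Something like this, going by the proposal (I haven't verified every default):

      const el = document.createElement('div');
      el.setHTML('<p>hi</p><script>alert(1)</script><b onclick="alert(2)">x</b>');
      // el.innerHTML is now roughly '<p>hi</p><b>x</b>': the script element and
      // the onclick handler are stripped by default, while setHTMLUnsafe() or
      // plain innerHTML would keep them in the DOM.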

_the_inflator 4 hours ago

Maybe it is then time to have something beyond "use strict" at the beginning of a JavaScript document, as one option for using the statement.

I think a config object in which you define script options like sanitization and other script configuration might be helpful.

After all, backward compatibility almost always needs to be ensured, and this might work. I am no spec guy, it is just an idea. React makes use of "use client/server", so this would be more central and explicit.

michalpleban 6 hours ago

So is this basically a safe version of innerHTML?

  • intrasight an hour ago

    I'm confused as to why you need a "safe" version if you're the one generating and injecting the HTML.

    • matmo 19 minutes ago

      Isn't this kinda like asking "why does my gun need a safety if I'm the only one consciously pulling the trigger"?

    • evbogue 30 minutes ago

      Why should a web page only have a single person generating and injecting HTML into it?

  • Octoth0rpe 6 hours ago

    Yes, although a slightly more relevant way of putting it would be that it's an inbuilt DOMPurify (dompurify being an npm package commonly used to sanitize html before injecting it).
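
    Roughly, where today you'd write the first two lines with the npm package, the built-in API folds it into one call (`el` and `untrusted` are placeholders):

      import DOMPurify from 'dompurify';
      el.innerHTML = DOMPurify.sanitize(untrusted); // library: sanitize the string, then assign
      el.setHTML(untrusted);                        // built-in: parse + sanitize in one step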

ishouldbework 5 hours ago

> It then removes any HTML entities that aren't allowed by the sanitizer configuration, and further removes any XSS-unsafe elements or attributes — *whether or not they are allowed by the sanitizer configuration*.

Emphasis mine. I do not understand this design choice. If I explicitly allow `script` tag, why should it be stripped?

If the method were called setXSSSafeSubsetOfHTML, sure, I guess, but it feels weird for setHTML to have an impossible-to-override filter.

  • strbean 5 hours ago

    This is primarily an ergonomic addition, so it kinda makes sense to me to not make the dangerous footguns more ergonomic in the process. You can still assign `innerHTML` etc. to do the dangerous thing.

    • meowface 5 hours ago

      I agree, though I also agree with the parent that the method name is a little bit confusing. "safeSetHTML" or "setUntrustedHTML" or something would be clearer.

      • strbean 4 hours ago

        Idk about that, there's a good argument that the most obvious methods should be the safe ones. That's what juniors will probably jump to first. If you need the unsafe ones, you'll probably be able to figure that out and find them quickly.

      • jfengel an hour ago

        I like React's dangerouslySetInnerHTML. The name so clearly conveys "you can do this but you really, really, really shouldn't".

    • hsbauauvhabzb 5 hours ago

      Ideally this should be called dangerouslySetInnerHTML but hindsight blah blah

  • evilpie 5 hours ago

    If you want to use an XSS-unsafe Sanitizer you have to use setHTMLUnsafe.
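
    A sketch per the current proposal (option names could still change): an allow-list that includes script only takes effect with the unsafe variant.

      const el = document.createElement('div');
      const markup = '<p>hi</p><script>alert(1)</script>';
      const config = { sanitizer: { elements: ['p', 'script'] } };
      el.setHTML(markup, config);       // the script element is removed anyway
      el.setHTMLUnsafe(markup, config); // the script element is kept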

  • jmull 5 hours ago

    I guess they are going for a safe default... the idea is people who don't carefully read the docs or carefully monitor the provenance of their dynamically generated HTML will probably reach for "setHTML()".

    Meanwhile, there's "setHTMLUnsafe()" and, of course, good old .innerHTML.

  • wewtyflakes 4 hours ago

    Wouldn't that open the floodgates by allowing code that could itself call `setHTML` again but then further revise the args to escalate its privileges?

ibowankenobi 4 hours ago

The API design could be better. Document fragments are designed to be reused. It should accept an optional fragment key that takes a document fragment. If it's not a fragment, throw; if it has children, empty the contents first.

  • spankalee 4 hours ago

    In what way are document fragments meant to be reused?

    They empty their contents into the new parent when they're appended, so they can't be meaningfully appended a second time without rebuilding them.

    `<template>` is meant to be reused, since you're meant to clone it in order to use it, and then you can clone it again.
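
    For example:

      const parent = document.createElement('div');
      const frag = document.createDocumentFragment();
      frag.append(document.createElement('p'));
      parent.append(frag);                // moves the <p> out of the fragment
      frag.childNodes.length;             // 0 -- appending frag again adds nothing
      const tpl = document.createElement('template');
      tpl.innerHTML = '<p>hi</p>';
      parent.append(tpl.content.cloneNode(true)); // clone and append...
      parent.append(tpl.content.cloneNode(true)); // ...as many times as you like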

redbell 5 hours ago

> This feature is not Baseline because it does not work in some of the most widely-used browsers.

This is interesting, but it appears to be in its early days, as none of the major browsers seem to support it... yet.

AlienRobot 5 hours ago

Great functionality, terrible name.

  • varun_ch 5 hours ago

    I sometimes wonder what the DOM APIs could look like in a hypothetical world where we could start over with everything.

  • jonathrg 4 hours ago

    Why? Does it not set the HTML?

    • netsharc 3 hours ago

      The name doesn't tell you "there's a lot of hidden sanitizing stuff inside this method"...

      Something like "setSafeHTML()" would be preferable. (Since it's Mozilla, there should be a few committee meetings to come up with the appropriate name)...

      • hoppp an hour ago

        Well, could it be safelySetHTML instead of setSafeHTML?

        The second one could imply the HTML is already safe, while the first one is a safe way to set HTML.

        If it's just setHTML, it could imply that we don't care whether it's safe or not.

dzogchen 5 hours ago

Neat. I think once this is adopted by HTMX (or similar libraries) you don't need to sanitize on the server side anymore?

  • dylan604 5 hours ago

    Do you honestly feel that we will ever be in a place for the server to not need to sanitize data from the client? Really? I don't. Any suggestion to me of "not needing to sanitize data from client" will immediately have me thinking the person doing the suggesting is not very good at their job, really new, or trying to scam me.

    There's no reason to not sanitize data from the client, yet every reason to sanitize it.

    • auxiliarymoose 3 hours ago

      If you sanitize on the server, you are making assumptions about what is safe/unsafe for your clients. It's possible to make these assumptions correctly, but that requires keeping them in sync with all clients which is hard to do correctly.

      Something that's sanitized from an HTML standpoint is not necessarily sanitized for native desktop & mobile applications, client UI frameworks, etc. For example, with Cloudflare's Cloudbleed security incident, malformed img tags sent by origin servers (which weren't by themselves unsafe in browsers) caused their edge servers to append garbage (including miscellaneous secure data) from heap memory to some responses that got indexed by search engines.

      Sanitization is always the sole responsibility of the consumer of the content to make sure it presents any inbound data safely. Sometimes the "consumer" is colocated on the server (e.g. for server rendered HTML + no native/API users) but many times it's not.

      • dylan604 3 hours ago

        > If you sanitize on the server, you are making assumptions about what is safe/unsafe for your clients.

        No. I'm making decisions on what is safe for my server. I'm a back end guy, I don't really care about your front end code. I will never deem your front end code's requests as trustworthy. If the front end code cannot properly handle encoding, the back end code will do what it needs to do to not allow stupid string injection attacks. I don't know where your request has been. Just because you think it came from your code in the browser does not mean that was the last place it was altered before hitting the back end.

        • auxiliarymoose 2 hours ago

          How can user input be unsafe on the server? Are you evaluating it somehow?

          User-generated content shouldn't be trusted in that way (inbound requests from client, data fields authored by users, etc.)

          • dylan604 2 hours ago

            Is that a serious question?

            INSERT INTO table (user_name) VALUES ...

            Are you one of today's 10000 on server side sanitizing of user data?

            • auxiliarymoose 2 hours ago

              Communicating with a SQL driver by concatenating strings containing user input and then evaluating it? wat?

              I'm very interested in what tech stack you are using where this is a problem.

              • jfengel an hour ago

                People do it all the time, on any tech stack that lets you execute command strings. A lot of early databases didn't even support things like parameterized inserts.

            • krapp 2 hours ago

              Are you one of today's 10000 on using parameterized queries and prepared statements?

              Unless you're doing something stupid like concatenating strings into SQL queries, there's no need to "sanitize" anything going into a database. SQL injection is a solved problem.

              Coming from the database and sending to the client, sure. But unless you're doing something stupid like concatenating strings into SQL statements it hasn't been necessary to "sanitize" data going into a database in ages.

              Edit: I didn't realize until I reread this comment that I repeated part of it twice, but I'm keeping it in because it bears repeating.
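
              For example, with the node-postgres driver (just to illustrate; any driver with placeholders works the same way, and `client` stands in for an already-connected pg client):

                // the driver sends the value separately from the SQL text,
                // so there's nothing to "sanitize"
                await client.query('INSERT INTO users (user_name) VALUES ($1)', [userName]);
                // vs. the stupid thing:
                await client.query(`INSERT INTO users (user_name) VALUES ('${userName}')`);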

              • hoppp an hour ago

                SQL injection is solved if you use dependencies that solve it, of course.

                Other than SQL injection, there's command or log injection; file names and any user-uploaded content need to be sanitized for XSS, and that includes images. Any incoming JSON data should be sanitized, extra fields removed, etc.

                Log injection is a pretty nasty sort of hack that, depending on how the logs are processed, can lead to XSS or command injection.

    • strbean 5 hours ago

      It can be a complicated and error-prone process, mainly in scenarios where you have multiple mediums that require different sanitizers. Obviously you should do it. But in such scenarios, the best practice is to sanitize as close to the place it is used as possible. I've seen terrible codebases where they tried to apply multiple layers of sanitization on user input before storing to the DB, then reverse the unneeded layers before output. Obviously this didn't work.

      Point being, if you can move sanitization even closer to where it is used, and that sanitization is actually provided by the standard library of the platform in question, that's a massive win.

      • dylan604 3 hours ago

        You're making a bad assumption that client-side code was the last place the submitted string was altered on its path to the server. The man in the middle might have a different idea and should always be protected against on the server, which is the last place to sanitize it.

      • immibis 4 hours ago

        By "sanitise" what's really meant is usually "escape". User typed their display name as <script>. You want the screen to say their display name, which is <script>. Therefore you send &lt;script&gt;. That's not their display name - that's just what you write in HTML to get their display name to appear on the screen. You shouldn't store it in the database in the display_name column.

        • strbean 4 hours ago

          Agreed. The codebase I'm thinking of was html encoding stuff before storing it, then when they needed to e.g. send an SMS, trying to remember to decode. Terrible.

    • padjo 4 hours ago

      Sanitizing as close as possible to where it is used is usually best; then you don't have to keep track of what's sanitized and what's not for very long.

      (Especially important if sanitation is not idempotent!)

    • jsmith99 5 hours ago

      It's arguably easier just to sanitise at display time otherwise you have problems like double escaping.

      • bpt3 4 hours ago

        Easier does not mean better, which seems to be true in this case given the many, many vulnerabilities that have been exploited over the years due to a lack of input sanitization.

        • padjo 4 hours ago

          In this case easier is actually better. Sanitize a string at the point where you are going to use it. The locality makes it easy to verify that sanitation has been done correctly for the context. The alternative means you have to maintain a chain of custody for the string and ensure it is safe.

          • dylan604 3 hours ago

            If you are using it at the client, sure, but then why is the server involved? If you are sending it to the server, you need to treat it like it is always coming from a hacker with very bad intentions. I don't care where the data comes from; my server will sanitize it for its own protection. After all, just because it left "clean" from your browser does not mean it was not interfered with elsewhere upstream, TLS be damned. If we've double-encoded something, that's fine, it won't blow up the server. At the end of the day, that's what is most important. If some double decoding doesn't happen correctly on the client, then <shrugEmoji>

modinfo 4 hours ago
  • exdeejay_ 2 hours ago

    This code only does the most basic and naive regex filtering, which even the inputs from a beginner XSS course would get past. With the Node example code and input string:

      <p>Hello <scr<script>ipt>alert(1)</scr<script>ipt> World</p>
    
    The program outputs:

      $ node .
      <p>Hello <script>alert(1)</script> World</p>
      {
        sanitizedHTML: '<p>Hello <script>alert(1)</script> World</p>',
        wasModified: true,
        removedElements: [],
        removedAttributes: []
      }
    
    Asking a chatbot to make a security function and then posting it for others to use without even reviewing it is not only disrespectful, but dangerous and grossly negligent. Please take this down.