beala 15 hours ago

If I may attempt to summarize:

CORS is a mechanism for servers to explicitly tell browsers which cross-origin requests may read responses. By default, browsers block cross-origin scripts from reading responses; unless the server explicitly permits it, the requesting origin cannot read them.

For example, a script on evil.com might send a request to bank.com/transactions to try and read the victim's transaction history. The browser allows the request to reach bank.com, but blocks evil.com from reading the response.

CSRF protection prevents malicious cross-origin requests from performing unauthorized actions on behalf of an authenticated user. If a script on evil.com sends a request to perform actions on bank.com (e.g., transferring money by requesting bank.com/transfer?from=victim&to=hacker), the server-side CSRF protection at bank.com rejects it (likely because the request doesn't contain a secret CSRF token).

In other words, CSRF protection is about write protection, preventing unauthorized cross-origin actions, while CORS is about read protection, controlling who can read cross-origin responses.

  • chuckadams 14 hours ago

    > In other words, CSRF protection is about write protection, preventing unauthorized cross-origin actions, while CORS is about read protection, controlling who can read cross-origin responses.

    I apologize for the length of the reply, I didn't have time to write a short one. But to sum up, CSRF is about writes, while CORS protects both reads and writes, and they're two very different things.

    CSRF is a sort of "vulnerability", but really just a fact of the open web: that any site can create a form that POSTs any data to any other site. If you're on forum.evil.com and click the "reply" button (or anything at all), that could instead POST a transfer request to your.bank.com, and if you happen to be logged in, it'll happen with your currently authenticated session. When the bank implements CSRF protection, it ensures that a known token on the page (sometimes communicated through headers instead) is sent with the transfer; if that token isn't present or doesn't match what's expected, the server rejects the request. It ensures that only forms generated by bank.com will have any effect, and it works because evil.com can't use JS to read the content of the page from bank.com due to cross-origin restrictions.
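
    A minimal sketch of that token dance, assuming an Express-style server with sessions (the routes and names are illustrative, not any real bank's code):

      import crypto from "node:crypto";
      import express from "express";
      import session from "express-session";

      // Let TypeScript know we stash a token on the session.
      declare module "express-session" {
        interface SessionData { csrfToken?: string }
      }

      const app = express();
      app.use(express.urlencoded({ extended: false }));
      app.use(session({ secret: "change-me", resave: false, saveUninitialized: true }));

      // GET: embed a fresh per-session token in the form we serve.
      app.get("/transfer", (req, res) => {
        req.session.csrfToken = crypto.randomBytes(32).toString("hex");
        res.send(`<form method="POST" action="/transfer">
          <input type="hidden" name="csrf_token" value="${req.session.csrfToken}">
          <button>Transfer</button></form>`);
      });

      // POST: only forms we generated carry the token; reject everything else.
      app.post("/transfer", (req, res) => {
        if (req.body.csrf_token !== req.session.csrfToken) {
          return res.status(403).send("CSRF token missing or mismatched");
        }
        res.send("transfer accepted"); // real code would compare in constant time
      });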

    CORS on the other hand is an escape hatch from a different cross-origin security mechanism that browsers enable by default: that a script on foo.com cannot make requests to bar.com except for "simple" requests (the definition of which is anything but simple; just assume any request that can do anything interesting is blocked). CORS is a way for bar.com to declare with a header that foo.com is in fact allowed to make such requests, and to drop the normal cross-origin blocking that would occur otherwise. You only have to use CORS to remove restrictions: if you do nothing, maximum security is the default. It's also strictly a browser technology: non-browser user agents do not need or use CORS and can call any API anytime.
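
    As a sketch of that opt-in (the header names are the real CORS ones; the origin and Express plumbing are illustrative), bar.com might answer:

      import express from "express";

      const app = express();

      // Without these headers, the browser blocks foo.com's non-simple requests.
      app.use((req, res, next) => {
        res.setHeader("Access-Control-Allow-Origin", "https://foo.com");
        res.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");
        res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
        // Non-simple requests trigger a preflight OPTIONS; answer it directly.
        if (req.method === "OPTIONS") return res.sendStatus(204);
        next();
      });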

    Fun fact: you don't need CSRF protection at all if your API is strictly JSON-based, or uses any content type that isn't one of the built-in form enclosure types. The Powers That Be are talking about adding a json enclosure type to forms, but submitting it would be subject to cross-origin restrictions, same as it is with JS.

    • PantaloonFlames 14 hours ago

      > It's a nice way to make an API "public", or would be if CORS supported a goddam wildcard for the host.

      I don't get what you mean. Access-Control-Allow-Origin supports a wildcard. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Ac...

      • LudwigNagasena 13 hours ago

        Also, nothing prevents you from checking the host server-side according to arbitrary logic and putting it into the CORS header dynamically.
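
        For instance (a sketch; the allowlist and Express setup are made up), the server can echo back origins it recognizes:

          import express from "express";

          const app = express();
          const allowed = new Set(["https://app.example.com", "https://partner.example.com"]);

          app.use((req, res, next) => {
            const origin = req.headers.origin;
            if (origin && allowed.has(origin)) {
              res.setHeader("Access-Control-Allow-Origin", origin); // reflect it back
            }
            res.setHeader("Vary", "Origin"); // keep caches from mixing origins up
            next();
          });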

        • Retr0id 23 minutes ago

          Nothing prevents this, but the client must still emit preflight checks (an extra round-trip) for every endpoint, and again if headers need to change.

      • chuckadams 14 hours ago

        Yah you saw that before I edited it out, I realized that gripe was actually about the behavior of AWS API Gateway rather than CORS itself. It hates the wildcard, or something, I can't even remember exactly what the issue was. Thus did I zap it.

    • jcmfernandes 13 hours ago

      > Fun fact: you don't need CSRF protection at all if your API is strictly JSON-based, or uses any content type that isn't one of the built-in form enclosure types. The Powers That Be are talking about adding a json enclosure type to forms, but submitting it would be subject to cross-origin restrictions, same as it is with JS.

      AFAIK, this is not totally accurate because the internet is a messy place. For example, the OAuth authorization code grant flow blesses passing the authorization code to the relying party (RP) in a GET request as a query parameter. The RP must protect against CSRF when receiving the authorization code.

      • catlifeonmars 8 hours ago

        > The RP must protect against CSRF when receiving the authorization code

        Is this via PKCE or (ID) token nonce validation?

      • chuckadams 13 hours ago

        Ah yes, good catch. That's what the `state` parameter is about, right? But I'll weasel out and say that lack of a content type (being a GET) is one of the built-in types too ;)

    • motorest 6 hours ago

      > If you're on forum.evil.com and click the "reply" button (or anything at all), that could instead POST a transfer request to your.bank.com, and if you happen to be logged in, it'll happen with your currently authenticated session.

      It should be clarified that "currently authenticated session" really means cookies, which browsers automatically attach to a request based on the destination domain, regardless of which page initiated the request.

      That's why CSRF attacks work: the attacker tricks the user's browser into sending a request to a domain where the user is already authenticated, and the browser automatically attaches the user's session cookies, which authorizes the request.

      CSRF tokens work by adding a kind of API key that is delivered outside the cookies (embedded in the page or a header), so the browser never attaches it automatically. Servers then check for the CSRF token in each request to determine whether the request is authorized. A CSRF attack alone fails because the forged request only carries the automatically attached cookies, not the token, and is therefore rejected.

  • layer8 14 hours ago

    To add to that:

    CORS is implemented by browsers based on standardized HTTP headers. It’s a web-standard browser-level mechanism.

    CSRF protection is implemented server-side (plus parts of the client-side code) based on tokens and/or custom headers. It’s an application-specific mechanism that the browser is agnostic about.

    • femto113 13 hours ago

      Some additional color:

      CORS today is just an annoying artifact of a poorly conceived idea about domain names somehow being a meaningful security boundary. It never amounted to anything more than a server asking the client not to do something, with no mechanism to force the client to comply and no direct way for the server to tell if the client is complying. It has never offered any security value; workarounds were developed before it even became a settled standard. It's so much more likely to prevent legitimate use than protect against illegitimate use that browsers typically include a way to turn it off.

      With CSRF the idea is that the server wants to be able to verify that a request from a client is one it invited (most commonly that a POST comes from a form that it served in an earlier GET). It's entirely up to the server to design the mechanism for that; the client typically has no idea it's happening (it's just feeding back to the server on a later request something it got from the server on a previous request). Also notable: despite the "cross-site" part of the name, it doesn't really have any direct relationship to "sites" or domains; servers can and do use the exact same mechanisms to detect or prevent issues like accidentally submitting the same form twice.

      • tsimionescu an hour ago

        CSRF protection wouldn't work as easily if CORS (or, more precisely, the same-origin policy that CORS allows you to relax in controlled ways) weren't there. And both cookies and TLS also rely entirely on domains being a meaningful security boundary.

        Without the SOP, evil.com could simply use JS to read the pages from bank.com, get a valid CSRF token, and then ask the browser to send a request to bank.com using that stolen CSRF token and the user's cookie. This maybe could be circumvented by tying the cookie and the original CSRF token together, but there might be other ways around that. Plus, if the browser weren't enforcing the SOP, different tabs might just be able to read each other's variables, since that is a feature today for multiple tabs accessing the same origin.

      • smagin an hour ago

        well it does make sense to assume that by default different origins belong to different people, and some of those people won't behave in a friendly way toward each other.

        There is little the server can do about that, because of the request-based model. The state that persists between requests lives in cookies, and it's the browser's job not to expose those cookies all around. Turning off the same-origin policy would be a terrible idea. For one, it's what makes CSRF protection work, by not allowing cross-origin reads.

      • robocat 6 hours ago

        > domain names somehow being a meaningful security boundary

        That's your Internet opinion. Perhaps expand on why you think that?

        I reckon domains have quite a few strong security features. Strong enough that we use them to help access valuable accounts.

  • alexashka 14 hours ago

    Regarding CSRF - how would I be authenticated to do actions on bank.com when I'm on evil.com?

    It seems like the problem is at the level of login information somehow crossing domain boundaries?

    What stops a script on evil.com from going to bank.com to get a CSRF token and then including that in their evil request?

    • chuckadams 13 hours ago

      > It seems like the problem is at the level of login information somehow crossing domain boundaries?

      The login information isn't so much crossing boundaries -- evil.com can't read your session cookie on bank.com -- but cookies that don't set a SameSite attribute allow anyone to send that information on your behalf, and effectively act as you in that request. Textbook example of a "confused deputy" attack.

      > What stops a script on evil.com from going to bank.com to get a CSRF token and then including that in their evil request?

      The token is either embedded in the page that bank.com sends (for an HTML form) or sent in a header and kept in local storage (for API clients). In neither case can evil.com read that information, due to cross-origin restrictions, and it changes with every form.
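
      In the header variant, the page's own JS typically echoes the token back on every state-changing request, roughly like this (a sketch; the meta tag name, header name, and endpoint are illustrative):

        // Browser-side code running on bank.com: read the token the server embedded.
        const token = document
          .querySelector<HTMLMetaElement>('meta[name="csrf-token"]')!.content;

        // Attach it to the request. evil.com can't do this, because it can't
        // read bank.com's page to learn the token in the first place.
        fetch("/transfer", {
          method: "POST",
          headers: { "Content-Type": "application/json", "X-CSRF-Token": token },
          body: JSON.stringify({ to: "savings", amount: 100 }),
        });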

      • alexashka 12 hours ago

        > In neither case can evil.com read that information

        What stops evil.com from having an API endpoint called evil.com/returnBankCSRFToken that goes to bank.com, scrapes token and returns it?

        CSRF tokens are just part of an HTML form - they are not hidden or obscured, and thus scraping them is trivial.

        When I go to evil.com, it calls the endpoint, gets token and sends a request to bank.com using said token, thus bypassing CSRF?

        • chii 5 hours ago

          > What stops evil.com from having an API endpoint called evil.com/returnBankCSRFToken that goes to bank.com, scrapes token and returns it?

          so evil.com will now require some sort of authentication mechanism with bank.com to scrape a valid CSRF token. If this authentication works (either if the user willingly gave their login information to evil.com, or they have a deal with bank.com directly), then there's no issues, and it works as expected.

        • hkpack 12 hours ago

          Nothing stops that, but it will be a different CSRF token, one that won't match the token generated for the original page.

          The server keeps track of which CSRF token was given to which client using cookies (usually some form of session ID), and stores it on the server somewhere.

          It is a very common pattern and all frameworks support it with the concept of "sessions" on the back end.

          • hansonkd 11 hours ago

            > stores it on the server somewhere

            You don't need to store anything on the server. Cookies for that domain are sent with the request, and it's enough for the server to check its cookie against the CSRF data in the request (see the sketch below).

            Browsers will send the bank.com cookies with the bank.com request. It is security built into the browser, which is why it's so important to use secure browsers and secure cookies.

            If the attacker convinces the user to use an insecure browser, CSRF protection can be circumvented, but at that point there are probably other exploits available.
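
            A sketch of that stateless check, often called a double-submit cookie (Express-style; the names are illustrative):

              import crypto from "node:crypto";
              import express from "express";
              import cookieParser from "cookie-parser";

              const app = express();
              app.use(cookieParser(), express.urlencoded({ extended: false }));

              app.get("/form", (_req, res) => {
                const token = crypto.randomBytes(32).toString("hex");
                res.cookie("csrf", token, { sameSite: "lax" });
                res.send(`<form method="POST" action="/transfer">
                  <input type="hidden" name="csrf_token" value="${token}"></form>`);
              });

              app.post("/transfer", (req, res) => {
                // Nothing stored server-side: just compare the two copies the browser sent.
                if (!req.cookies.csrf || req.cookies.csrf !== req.body.csrf_token) {
                  return res.sendStatus(403);
                }
                res.send("ok");
              });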

    • gavinsyancey 14 hours ago

      When you logged in to bank.com, it set a cookie that your browser presents when it makes any request to bank.com, regardless of how it was initiated (i.e. it would still send the cookie on a cross-site XHR initiated by evil.com's JavaScript).

      > What stops a script on evil.com from going to bank.com...

      CORS

      • JimDabell 8 hours ago

        > > What stops a script on evil.com from going to bank.com...

        > CORS

        CORS does the exact opposite to what you think.

        Those types of cross-site requests are forbidden by default by the Same-Origin Policy (SOP) and CORS is designed so you can allow those requests where they would otherwise be forbidden.

        CORS is not a security barrier. Adding CORS removes a security barrier.

      • Macha 13 hours ago

        Note that cookies now have the SameSite option, which should prevent this.

      • alexashka 13 hours ago

        > it set a cookie that your browser presents when it makes any request to bank.com, regardless of how it was initiated

        Right, this seems like a very bad idea and now everyone has to do CSRF because of it?

        CORS doesn't prevent evil.com from sending a request to bank.com, it only prevents reading the response, no?

        So again, what stops evil.com from sending a request to say transfer 1 BBBBillion dollars to bank.com and including a CSRF token it gets from visiting bank.com?

        • blincoln 37 minutes ago

          CSRF tokens are only valid for a particular user, or session, or sometimes even a particular page load.

          If there's a way for evil.com to obtain a CSRF token that's valid for an arbitrary user, it's a vulnerability, just like if evil.com could obtain the user's session token, JWT, etc.

        • chuckadams 13 hours ago

          > Right, this seems like a very bad idea and now everyone has to do CSRF because of it?

          Yep, that pretty much sums it up.

          CORS doesn't have to enter into it though: evil.com just has no way to read the CSRF token from bank.com, it's a one-time password that changes with every form (one hopes) and it's embedded in places that it can't access. It can send an arbitrary POST request, but no script originating from evil.com (or anywhere that is not bank.com) can get at the token it would need for that post to get past the CSRF prevention layer.

        • voxic11 13 hours ago

          > it only prevents reading the response, no?

          > So again, what stops evil.com from sending a request to say transfer 1 BBBBillion dollars to bank.com and including a CSRF token it gets from visiting bank.com?

          It can't read the response from bank.com so it can't read the CSRF token. The token basically proves the caller is allowed to read bank.com with the user's credentials. Which is only possible if the caller lives on bank.com or an origin that bank.com has allowed via CORS.

Scaevolus 18 hours ago

> JS-initiated requests are not allowed cross-site by default anyway

Incorrect. You can use fetch() to initiate cross-site requests as long as you only use the allowed headers.

https://developer.mozilla.org/en-US/docs/Glossary/CORS-safel...
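
For example (URLs made up), the first fetch below is "simple" and goes out cross-site with no preflight; the custom header on the second forces an OPTIONS preflight, which fails unless the server opts in via CORS:

  // Simple request: GET with only safelisted headers. The browser sends it;
  // CORS only decides whether this page may read the response.
  fetch("https://api.example.com/data");

  // Not simple: the custom header makes the browser preflight the request.
  fetch("https://api.example.com/data", {
    headers: { "X-Custom-Header": "1" },
  });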

  • duskwuff 17 hours ago

    And JS can also indirectly initiate requests for resource or page fetches, e.g. by creating image tags or popup windows. It can't see the results directly, but it can make some inferences.

    • 1oooqooq 16 hours ago

      there are so, so, so many ways to read this data back it's not even fun.

      • Muromec 16 hours ago

        There are ways, but they generally need the cooperation of both sides of the inter-domain boundary. What you generally can't do is make arbitrary reads from the context of another domain (e.g. call GET on their API and read the result) into your domain without them explicitly allowing it.

        • duskwuff 16 hours ago

          Right. What you can sometimes do is observe the effects of the content being loaded, e.g. see the dimensions of an image element change when its content is loaded.

          • RandomDistort 16 hours ago

            Is there some document somewhere that lists all the potential ways of doing stuff like this?

            • Herrera 14 hours ago

              Yeah, https://xsleaks.dev tracks most of the known ways to leak cross-origin data.

              • smagin 14 hours ago

                oh hell yes. And oh yes, iframes and postMessage, of course people will set them up incorrectly, and even if they don't, some (probably not that important, but still) data will leak if you're creative enough. Thanks for the link!

  • Evidlo 4 hours ago

    Is that actually true? This SO question seems to contradict that: https://stackoverflow.com/questions/44121593/sending-a-simpl...

    I just want to fetch publicly available information from my client-side app, but CORS gets in the way and forces me to use a sketchy CORS proxy. Makes me really hate CORS

  • smagin 14 hours ago

    you're right, you can initiate cross-site requests that _could be_ form submissions. It was even in the post but I thought I'd omit that bit for clarity. I should have decided otherwise.

yonran 12 hours ago

To respond to a question in the blog post:

>> The motivation is that the <form> element from HTML 4.0 (which predates cross-site fetch() and XMLHttpRequest) can submit simple requests to any origin,…

> Question to readers: How is that in line with the SameSite initiative?

I actually added that little paragraph to the MDN CORS article in 2022 (https://github.com/mdn/content/pull/20922) to clarify where the term “simple request” from CORS came from, since previously the article only said that it is not mentioned in the fetch spec. You’re right that the paragraph did not mention the 2019 CSRF prevention in browsers that support or default to SameSite=Lax (https://www.ietf.org/archive/id/draft-ietf-httpbis-rfc6265bi...), so cross-site forms with method=POST will not have cookies anymore unless the server created the cookie with SameSite=None.

It is quite confusing that SameSite was added seemingly independently of CORS preflight. I wonder why browser makers didn’t just make all cross-origin POST requests require a preflight request instead of making same-site-flag a field of each cookie.

matsemann 17 hours ago

One thing to note is that if you think you're safe from having to use CSRF protection because you only serve endpoints you yourself consume by POSTing JSON, some libraries (like Django REST Framework) can also opaquely handle HTML forms if the Content-Type header is set accordingly, accidentally opening you up to someone putting a form on their site that posts to yours on users' behalf.

  • chuckadams 12 hours ago

    First thing I do on a Laravel site is add a middleware to all my API routes that allows only blessed content types (usually application/json and application/x-ndjson). With Symfony it's a couple lines of yaml.
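
    The same idea in Express terms, as a sketch (the route and type list are made up):

      import express from "express";

      const app = express();
      const blessed = new Set(["application/json", "application/x-ndjson"]);

      // HTML forms can only send urlencoded/multipart/text-plain, so requiring
      // a JSON-ish type on body-carrying methods closes the cross-site <form> door.
      app.use("/api", (req, res, next) => {
        const type = (req.headers["content-type"] ?? "").split(";")[0].trim();
        if (["POST", "PUT", "PATCH"].includes(req.method) && !blessed.has(type)) {
          return res.status(415).send("Unsupported Media Type");
        }
        next();
      });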

ListeningPie 5 hours ago

At the bottom the article links to this discussion, but not your other articles. Did you happen to find the discussion on Hacker News or post it yourself?

If you happened to have found it, has someone systematized linking their articles to their respective Hacker News discussions?

  • smagin an hour ago

    This is my blog. I haven't posted all the articles from there to Hacker News. I think some of them are not worth discussing because they're too old or too poorly written, but some just aren't posted because I didn't think of it at the time. Thanks for the interest, I will re-read what I wrote and post some.

IgorPartola 18 hours ago

What I never quite grasped despite working with HTTP for decades now: how come, before CORS was a thing, you could send a request to any arbitrary endpoint that wasn't the page origin, just not see the response? Was this an accidental thing that made it into the spec? Was this done on purpose in anticipation of XSS-I-mean-mashups-I-mean-web-apps? Was it just what the dominant browser did and others just followed suit?

  • Muromec 18 hours ago

    That makes perfect sense in the early model of the internet, where everything was just links and documents. You can make an HTML form with an action attribute pointing to a different domain. That's a feature, not a bug, and isn't a security vulnerability in itself. A common use for this is to make "search this site in google" widgets.

    Then you can make the form send POST requests by changing its method. Nothing wrong with this either -- the browser will navigate there and serve the page to the user, not to the origin server.

    What makes it problematic is the combination of cookies from the destination domain and programmatic input or hidden fields from the origin domain. But the only problem it can cause is the side effects that the POST request causes on the back end, as it again doesn't let the origin page read the result (i.e. content doesn't cross the domain boundary).

    Now in the world of JS applications that make requests on behalf of the user without any input, and backend servers acting on POST requests as user input, the previously ignored side effects are the main use and a much bigger problem.

    • smagin 14 hours ago

      "search this site in google" shouldn't even be a POST request, but yeah, when we'll have better defaults for cookies it should work nicer. And if you are a web developer, you should check your session cookie attributes and explicitly set them to SameSite=Lax HttpOnly unless your frameworks does that already and unless you know what you're doing

      • Muromec 14 hours ago

        It was a GET request, but the point is -- you can make the request from the browser to a different domain by making a form. With JS and the DOM you can make a hidden iframe and make the request without the user initiating or even noticing it, but in both cases you don't get to read the result.

  • fweimer 17 hours ago

    I think it once was a common design pattern to have static HTML with a form that was submitted to a different server on a different domain, or at least a different protocol. For example, login forms served over HTTP were common, but the actual POST request was sent over HTTPS (which at least hid the username/password from passive observers). When Javascript added the capability to perform client-side form validation, it inherited this cross-domain POST capability.

    I don't know why <script> has the ability to perform cross-domain reads (the exception to the not-able-to-see-the-response rule). I doubt anyone had CDNs with popular Javascript on their minds when this was set in stone.

    • Muromec 17 hours ago

      >I don't know why <script> has the ability to perform cross-domain reads

      That's because all scripts loaded on a page operate in the same global namespace of the same JavaScript VM, which has the origin of the page. Since there is no context granularity below the page level in the VM, they have to either share it or not work.

      You can't read it, however; you can only ask the browser to execute it, the same way you can ask it to show an image. It's just that execution can result in a read as a side effect, by calling a callback or setting a well-known global variable.

      • fweimer 16 hours ago

        What I meant is that from a 1996 perspective, I don't see a good reason not to block cross-domain <script> loads. The risks must have already been obvious at the time (applications serving dynamically generated scripts that can execute out of origin and reveal otherwise inaccessible information). And the nascent web ad business had not yet adopted cross-domain script injection as the delivery method.

        • swatcoder 16 hours ago

          The threat model of one site leveraging the user's browser to covertly and maliciously engage with a third-party site was something that emerged and matured gradually, as was the idea that a browser was somehow duty-bound to do something about it.

          Browsers were just software that rendered documents and ran their scripts, and it was taken for granted that anything they did was something the user wanted or would at least hold personal responsibility for. Outside of corporate IT environments, users didn't expect browsers to be nannying them with limitations inserted by anxious vendors and were more interested in seeing new capabilities become available than they were in seeing capabilities narrowed in the name of "safety" or "security".

          In that light, being able to access resources from other domains adds many exciting capabilities to a web session and opens up all kinds of innovative types of documents and web applications.

          It was a long and very gradual shift from that world to the one we're in now, where there's basically a cartel of three browser engines that decide what people can and can't do. That change is mostly for the best on net, when it comes to enabling a web that can offer better assurances around access to high-value personal and commercial data, but it took a while for a consensus to form that this was better than just having a more liberated and capable tool on one's computer.

        • Muromec 16 hours ago

          There is no reason for the browser to block them if the page is static and written by hand. If I added this script to my page, then I want to do the funny thing, and if the script misbehaves, I remove it.

          • fweimer 16 hours ago

            Are you talking about the risk from inclusion to the including page? The concern about cross-origin requests was in the other direction (to the remote resource, not the including page). That concern applies to <script> inclusion as well, in addition to the possibility of running unwanted or incompatible script code.

            • Muromec 15 hours ago

              Yes, I was talking about the risk from the perspective of the including page. From the perspective of the risk to the remote, it makes even less sense from the 90ies point of view. Data isn't supposed to be in javascript anyway, it should be in XML. It's again on you (the remote) if you expose your secrets in javascript that is dynamically generated per user.

              With hindsight from this year -- of course you have a point.

              • fweimer 15 hours ago

                Ahh, the data-in-XML argument is indeed very convincing from a historic perspective.

                • Muromec 14 hours ago

                  XHR is called XMLHttpRequest for a reason, but that was added in the 2000ies I think. SOAP was late 90ies, but I don't think you could call it from the browser. There was no reason to have data in javascript before DOM and XHR, which were late additions. All the stuff was in the page itself, rendered by the server, and you don't get that across the domain boundary.

                  • thaumasiotes 7 hours ago

                    > 2000ies

                    Now this is interesting. I've seen a lot of "40ties", "90ies", etc around, and I'm not sure why people do that. But once you've done it, it's clear how to read the text. Most of the non-numeric suffix is redundant; people mean "forties", not "forty-ties".

                    But "2000ies" has no potential redundancy and no plausible pronunciation. It's spelled as if you're supposed to pronounce it "two thousandies", but there's no such thing as a thousandy.

            • edoceo 15 hours ago

              The point being, presumably, the page author explicitly chose to include a cross-domain script

              • Muromec 15 hours ago

                The script author however didn't. CORS is more about the remote giving consent to exfiltrate its data than it is about preventing data from being injected into it. You can always reject the data coming in.

  • PantaloonFlames 18 hours ago

    I don’t have a Time Machine or a 1998-era browser but I’m not sure what you described was the case. I think in the before times, a browser could send a request to any arbitrary endpoint that was not the page origin, and it could also see the response. I might be wrong.

    But anyway, ancient history.

  • LegionMammal978 18 hours ago

    You still can make many kinds of requests [0] to an arbitrary endpoint that isn't the page origin, without being able to see the response. (Basically, anything that a <link> or a form submission could do.) And you can't include any cookies or other credentials in the request unless they have SameSite=None (except on ancient browsers), and if you do, then you still can't see the response unless the endpoint opts in.

    Really, there's exactly one thing that the mandatory CORS headers protect against: endpoints that authorize the request based on the requester's IP address and nothing else. (The biggest case of this would be local addresses in the requester's network, but they've been planning on adding even more mandatory headers for that [1].) They don't protect against data exfiltration, third-party cookie exfiltration (that's what the SameSite directive is for), or any other such attack vector.

    [0] https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#simpl...

    [1] https://wicg.github.io/private-network-access/

    • IgorPartola 18 hours ago

      Yes I know this is still the case today. My question is: how did this come about? It seems to me that in the olden days the idea of cross origin requests wasn’t really needed as you’d be lucky to have your own domain name, let alone make requests to separate services in an era of static HTML pages with no CSS or JavaScript. What exactly was this feature for? Or was it not a feature and just an oversight of the security model that got codified into the spec?

      • hinkley 17 hours ago

        The field of web applications didn’t really blow open until we were into the DotCom era. By then Berners-Lee’s concept for the web was already almost ten years old. I think it’s hard for people to conceive today what it was like to have to own a bookshelf of books in order to be a productive programmer on Windows, for instance. Programming paradigms were measured literally in shelf-feet.

        Practically part of the reason Java took off was it was birthed onto the Internet and came with Javadoc. And even then the spec for Java Server Pages was so “late” to the party that I had already worked on my first web framework when the draft came out, for a company that was already on its second templating engine. Which put me in rarified air that I did not appreciate at the time.

        It was the Wild West and not in the gunslinger sense, but in the “one pair of wire cutters could isolate an entire community” sense.

      • LegionMammal978 18 hours ago

        Hotlinking <img>s from other domains has been a thing forever, as far as I'm aware, and that's the archetypical example of a cross-origin request. <iframe>s (or rather, <frame>s) are another old example. And it's not like those would've been considered a security issue, since at worst it would eat up the other domain's bandwidth. The current status quo is a restriction on what scripts are allowed to do, compared to those elements.

        • IgorPartola 17 hours ago

          By definition those do allow you to see the response so that’s not really what is being discussed.

          • Muromec 17 hours ago

            Not really. You as an application generally can't read across origin boundaries, but you can ask the browser to show it to the user.

actinium226 14 hours ago

There's something that continues to confuse me about CSRF protection.

What's to stop an attacker, let's call her Eve, from going to goodsite.com, getting a CSRF token, putting it on badsite.com, and duping Alice into submitting a request to goodsite.com from badsite.com?

  • bastawhiz 14 hours ago

    The csrf token is ideally tied to your session. If it's anonymous, Eve didn't need Alice to visit a page in the first place. If it's tied to a session, Eve can't create a working token for Alice.

    • smagin 4 hours ago

      yup. You didn't imply it but just in case -- this token shouldn't be the same as the session token. Session tokens should be `HttpOnly`, so that we don't even expose them to javascript.

      • blincoln 25 minutes ago

        The HttpOnly flag isn't really practical in modern web apps where so much logic runs in JS in the browser and makes requests to APIs. It's a leftover from an earlier era of web app architecture.

        If it can be enabled without breaking something, sure, it's a good idea, but unless your app is 2000s-era ASP.NET code or a CGI script, preventing browser-side JS from accessing the session token will probably break something.

  • preinheimer 13 hours ago

    Traditionally CSRF tokens had two parts: something in a cookie (or a server-side data store that used a cookie as the ID), and something in a form element on the page.

    So while an attacker could trick your browser into making a request to get the cookie, and trick your browser into submitting arbitrary form data, they couldn't get the CSRF tokens to match.

  • theogravity 14 hours ago

    The CSRF token is usually stored in a cookie. I guess one could try stealing the cookie assuming the CSRF token hasn't been consumed.

    But if one's cookie happens to be stolen, it can be assumed they already have access to your session in general anyway, making CSRF moot.

shermantanktop 10 hours ago

Whatever else these things do, one thing they don’t do is support easy diagnostic tracing when a legitimate use case isn’t quite configured properly.

I have stared many times at opaque errors that suggested (to me) one possible cause when the truth ended up being totally different.

mjevans 18 hours ago

A post I was replying to got deleted, but I'd still like to gripe about the positives and negatives of the current 'web browser + security things' model.

Better in the sense of not being locked into an outdated and possibly protocol-insecure cryptosystem model.

Worse in the sense: Random code from the Internet shouldn't have any chance to touch user credentials. At most it should be able to introspect the status of authentication, list of privileges, and what the end user has told their browser to do with that and the webpage.

If it weren't for e.g. IE6 and major companies allergic to things not invented there, we'd have stronger security foundations but easier end-user interfaces for managing them. IRL metaphors such as a key ring and the use of a key (or figurative representations like a cash/credit card in a digital wallet) could be part of the user interface and provide context for _when_ to offer such tokens.

  • hu3 16 hours ago

    > Random code from the Internet shouldn't have any chance to touch user credentials

    One thing that helps with this are HttpOnly cookies.

    "A cookie with the HttpOnly attribute can't be accessed by JavaScript, for example using Document.cookie; it can only be accessed when it reaches the server. Cookies that persist user sessions for example should have the HttpOnly attribute set — it would be really insecure to make them available to JavaScript. This precaution helps mitigate cross-site scripting (XSS) attacks."

    https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies

    • mjevans 11 hours ago

      That does nothing to address Credentialing and Authorization issues.

      • hu3 10 hours ago

        How so? Cookies are the most common way to implement persistent user sessions and by consequence, authn/authz.

        If it can't be accessed by JS, then it at least does something, to say the least.

        Could you expand your reasoning?

webdever 17 hours ago

Subdomains are not always part of the same site. See the "public suffix list". For an example, think of abc.github.io vs def.github.io.

I didn't get the part at the end about trusting browsers. As a website owner you can't rely on browsers, since hackers don't have to use a browser to send requests and read responses.

  • Muromec 17 hours ago

    You do rely on browsers to isolate contexts. The problem with CSRF is that data leaks from one privileged context to another (think of reading from kernel memory of another vm on the same host on AWS). If you don't have the browser, you don't have the user session to abuse in the first place.

    The whole thing boils down to this:

    - browser has two tabs -- one with authenticated session to web banking, another with your bad app

    - you as a bad app can ask the browser to make an HTTP request to the bank API, and the browser will not just happily do it, but will also attach the cookie from the authenticated session the user has opened in the other tab. That's CSRF, and it's not even a bug

    - you however can't, as a bad app, read the response unless the bank API tells the browser you are allowed to, which is what CORS is for. Maybe you have an integration with them or something

    The browser is holding both contexts and is there to enforce what data can cross the domain boundary. No browser, no problem.

  • hinkley 17 hours ago

    Or abc.wordpress.com

    Also a lot of university departments and divisions in large enough corporations need to be treated like separate entities.

TheRealPomax 18 hours ago

And why do browsers not let users go "I don't care about what this server's headers say, you will do as I say because I clicked the little checkbox that says I know better than you".

(On which note, no CSRF/CORS post is complete without talking about CSP, too)

  • tedunangst 16 hours ago

    Because then you will get users whining "I didn't know what the checkbox did and you shouldn't have let me check it."

  • syntheticcdo 17 hours ago

    You can! Go ahead and launch chrome with the --disable-web-security argument.

  • LegionMammal978 18 hours ago

    I'd think SameSite/Secure directives on cookies are genuinely important to prevent any malicious website from stealing all your credentials. Otherwise, I'd imagine it's the usual "Because those dastardly corporations will tell people to disable it, just because they can't get it to work!!!"

  • yoavm 16 hours ago

    I wish the browser would just say "This website is trying to fetch data from example.com, do you agree?"

    The whole CORS thing is so off, and it destroyed the ability to build so many things on the internet. I often think it protects websites more than it protects users. We could have at least allowed making cookie-less requests.

    • bastawhiz 14 hours ago

      > This website is trying to fetch data from example.com, do you agree

      I don't know, do I? How am I supposed to know? How am I supposed to explain to my mom when to click yes and when not to? The average person shouldn't ever have to think about this.

      Imagine if any website could ask to access any other website, for an innocent reason, and then scrape whatever account information they wanted? "Do you want to let this website access google.com?" Great, now your whole digital life belongs to that page. It's a privacy nightmare.

      > it destroyed to ability to build so many things on the internet

      It only destroyed the ability for any website to access another website as the current user. What it destroyed is the ability for a web page to impersonate users.

      • yoavm 6 hours ago

        Reading cookie-less responses is also forbidden. I couldn't read your Google account information, just make an anonymous Google search through your browser. I fail to see what the big deal is.

        • smagin 40 minutes ago

          Why would you need that?

          Also, one thing I can speculate is that phishing would become even easier if such things were allowed.

  • siva7 18 hours ago

    Almost as pointless as "Yes, accept all cookies" for our european friends.