cross-posted from: https://lemmy.nz/post/28397398
The suspension triggered strong responses across social media and beyond. Hashtags like #CancelDisneyPlus and #CancelHulu trended as users shared screenshots of their canceled subscriptions.
With cancellations surging, many subscribers reported technical issues. On Reddit’s r/Fauxmoi, one post read, “The page to cancel your Hulu/Disney+ subscription keeps crashing.”
“Crashes”? How convenient.
The cancellation page specifically. Everything else is fine.
On one hand, could be a “crash”. On the other hand, tons of websites break when they get a little extra traffic.
Side tangent: it seems odd to me that this is still a thing. Most company websites aren't hosted on premises, so do services like (I assume) AWS not scale up when there's extra traffic? Squarespace has been advertising for years that it will scale up if there's extra traffic. I've never tested it, but still.
Scaling is only for companies that have not been allowed to purchase and enshittify every serious competitor. (Pixar, Marvel, HBO…)
I feel like Disney has internal stuff? I listened to a podcast where an ex-employee changed the fonts on a bunch of stuff to Wingdings, etc., and made everything unusable.
You have to design for scalability. Bottlenecks can be anywhere. Even if their virtual servers' CPU and RAM can scale up, other parts may be bottlenecks. Maybe the connection to the DB. Maybe the DB is elsewhere and doesn't scale. You can't really make a reasonable guess from the outside.
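To make the "other parts may be bottlenecks" point concrete, here's a minimal Python sketch with made-up numbers: autoscaling can keep adding web workers, but if every request needs a connection from a fixed-size database pool, the fleet's real capacity stops growing.

```python
# Made-up numbers: autoscaling adds web workers, but the DB pool is fixed.
DB_MAX_CONNECTIONS = 100   # the database itself does not scale
CONNS_PER_WORKER = 5       # each worker needs this many connections under load

def effective_capacity(workers: int) -> int:
    """Concurrent requests the fleet can serve, capped by the DB pool."""
    wanted = workers * CONNS_PER_WORKER
    return min(wanted, DB_MAX_CONNECTIONS)

for workers in (10, 50, 200):
    print(workers, "workers ->", effective_capacity(workers), "concurrent requests")
```

Past 20 workers, adding more changes nothing; the page still falls over under a cancellation rush.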
Mass cancellation is not usually a scenario they would design bottlenecks around. It also doesn't add any value for them.
It could also be poor graceful failure. What we see as a crash may be from some unavailability deep in a long pipeline of services.
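A toy example of that (all service names hypothetical): a failure three calls deep surfaces on the cancellation page as either a generic crash or a graceful error, depending entirely on whether someone bothered to handle it.

```python
# All names hypothetical: the cancellation page calls billing, which calls a
# subscription service, which calls a payment provider.
class DependencyDown(Exception):
    pass

def payment_provider():
    raise DependencyDown("payment provider timed out")  # the real failure, three hops deep

def subscription_service():
    return payment_provider()

def billing_api():
    return subscription_service()

def cancellation_page() -> str:
    try:
        billing_api()
        return "200: subscription cancelled"
    except DependencyDown:
        # Without this handler the user just sees a generic crash page.
        return "503: please try again later"

print(cancellation_page())
```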
If your page is just static (no login, no interaction, everyone always sees the same thing), then it scales easily. Scaling means you copy the site to more servers. Now imagine a user adds a comment. Now you need to add the comment to every copy of your site, so that everyone sees it regardless of which server they use. So a comment creates more work the more servers you use. And this is where scaling becomes a complex science that you need to manually prepare for as a software developer. You need to figure out what data will be stored where and accessed how.
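Here's a rough Python sketch of that fan-out problem, just to show the asymmetry: reads can be served by any single copy, but one comment turns into one write per server.

```python
# Three servers, each holding a full copy of the comments.
replicas = [set(), set(), set()]

def add_comment(comment: str) -> int:
    """Write the comment to every replica; returns how many writes that cost."""
    for replica in replicas:
        replica.add(comment)
    return len(replicas)   # one user action -> N writes, growing as you scale out

def read_comments(server_index: int) -> set:
    """Any single server can answer a read on its own, so reads scale easily."""
    return replicas[server_index]

print(add_comment("great show!"))              # 3 writes for one comment
print(read_comments(0) == read_comments(2))    # True once the write has fanned out
```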
Caching servers. They self-replicate when a change is committed, then send a signal back to the main server that the task has completed.
I am not sure what you are trying to say?
Oh right, I skipped a part. It's not really a dev-complexity prep issue. You build the database that serves the comments etc. as if it were in one place, then you deploy cache servers for scaling. They self-replicate, so a comment in California gets committed to the database, the server in New York pulls over the info from the California change, and it sends back that it's synced with the change. And vice versa. The caching servers do the work, not your program.
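For what it's worth, here's a very simplified sketch of the pull-and-ack flow you're describing (not how any particular provider actually implements it): one primary database plus regional caches that pull new writes and acknowledge when they're in sync.

```python
class Primary:
    def __init__(self):
        self.log = []    # committed writes, in order
        self.acks = {}   # region -> last log position acknowledged

    def commit(self, write: str):
        self.log.append(write)


class RegionalCache:
    def __init__(self, region: str, primary: Primary):
        self.region = region
        self.primary = primary
        self.data = []

    def sync(self):
        # Pull anything committed since the last sync, then ack back.
        missing = self.primary.log[len(self.data):]
        self.data.extend(missing)
        self.primary.acks[self.region] = len(self.data)


primary = Primary()
west = RegionalCache("us-west", primary)
east = RegionalCache("us-east", primary)

primary.commit("comment posted in California")
east.sync()   # the New York cache pulls the change and acks it
print(east.data, primary.acks)  # ['comment posted in California'] {'us-east': 1}
```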
That entirely depends on your application. What you described is one possible approach that only works in specific circumstances.
Application specifics aside, it's how the internet currently works to keep latency low. AWS, Azure, Linode, etc. have data centers across the globe to replicate data near where the people are.
Again, yes and no. While you're right that pretty much every larger website will use a cache server in some way (at least in the form of a CDN), cache servers really don't help you in any way with things like a customer canceling their subscription, which is what this post is about. That is all back-end work. Yes, those are probably the app specifics you mention, but glossing over them misses why solving this is not as easy as enabling auto-scaling.
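A small sketch of that distinction, with hypothetical paths: cacheable GETs can be answered at the edge without touching the origin, but a cancellation is a state-changing write that falls through to the back end every single time, so edge capacity doesn't save you.

```python
# Hypothetical paths; the edge cache only answers cacheable GETs.
cache = {"GET /assets/logo.png": "cached bytes"}

def origin(method: str, path: str) -> str:
    # Stand-in for the real back end (billing, subscriptions, payments).
    return f"origin handled {method} {path}"

def edge(method: str, path: str) -> str:
    key = f"{method} {path}"
    if method == "GET" and key in cache:
        return cache[key]           # served at the edge, origin never sees it
    return origin(method, path)     # writes always fall through to the origin

print(edge("GET", "/assets/logo.png"))   # cache hit
print(edge("POST", "/account/cancel"))   # always hits the back end
```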
Scaling has a budget, I’m sure. They’ll only pay for so much.