• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: October 4th, 2023


  • In both series I think the first book is the best

    Yeah, the Hitchhiker’s Guide series starts out with dark humor, sure, but…it’s still irreverent and done in a fairly light-hearted way. The series gets grimmer and grimmer the further one goes, though, and I just found myself not enjoying it by the end. I don’t understand people who love the whole series; I found it wearing to read towards the end.

    That and the Dune series are my own top “love the first book, but the series goes downhill from there” series.

    EDIT: Calvin and Hobbes did the same thing. I love a ton of the Calvin and Hobbes cartoons, but man, in his last few books’ worth of material, Bill Watterson was not happy, and his cartoons were just cynical and unhappy too.


  • Hmm.

    I think maybe Illusion, by Paula Volsky, which I read many years back. She writes books that basically take real-life revolutions and retell them as historical fantasy in a lowish-magic world. Illusion was the French Revolution.

    I haven’t read it in ages, but I decided that it was my favorite novel at some point, and never really had another come along that I think quite replaced it. I don’t know for sure if I’d rate it as highly now, but I remember being absolutely entranced with it. She’s got other books that do similar things for other revolutions, but that was my favorite.

    George R. R. Martin’s fantasy stuff is also low-magic, and I like it. I think it might have been the last time I read any fantasy.

    Snow Crash, a cyberpunk novel by Neal Stephenson, is probably the book I’ve reread the most over the years. But while it has a lot that I like, it’s also got some pacing issues, albeit not as severe as some of his other novels; I think that stepping between action scenes and someone lecturing on the ancient Sumerian linguistics that Stephenson researched is kind of jarring.

    My fiction book reading has really fallen off over the years.

    My favorite comic book…I don’t know about a single comic book. I guess the Sandman graphic novel series, by Neil Gaiman.

    I’ve done much more nonfiction reading in recent years. For nonfiction…I don’t know. That seems so dependent on what it is that you want to find out about. I think I’d have a hard time ranking books by purely content-independent aspects. How do I compare a book on Native American primitive looms and weaving techniques to a book on Cold War-era submarine designs?



    I agree that it’s less critical than it was at one point. Any modern journaling or copy-on-write filesystem, including ext4 and btrfs, isn’t at risk of filesystem-level corruption from power loss, and a DBMS like PostgreSQL or MySQL should handle it at the application level. That being said, there is still other software out there that may take issue with being interrupted. Doing an apt upgrade is not guaranteed to handle power loss cleanly, for example. And I’m not too sanguine about hardware not being bricked if I lose power while fwupd is updating the firmware on attached hardware. Maybe a given piece of hardware has a safe, atomic upgrade procedure…and maybe it doesn’t.

    That does also mean, if there’s no power backup at all, that one won’t have the system available for the duration of the outage. That may be no big deal, or might be a real pain.


    Yeah, I listed it as one possibility, maybe the best one I can think of, but I also explained why I’ve got some issues with that route and why it wouldn’t be my preferred one. Maybe it is the best option generally available right now.

    The “just use a UPS plus a second system” route makes a lot of sense with diesel generator systems, because there the hardware physically cannot come up to speed in time. A generator cannot start in 10ms, so you need a flywheel or battery or some other kind of energy-storage system in place to bridge the gap…but that shouldn’t be a fundamental constraint on those home large-battery backup systems. They don’t have to be equipped with an inverter able to come online in 10ms…but they could. In the generator scenario, it’s simply not an option.

    I’d like, if possible, to have the computer have a “unified” view of all of the backing energy storage. In the generator case, the “time remaining” is a function of the fuel in the tank, and I’m pretty sure that it’s not uncommon for someone to have some kind of secondary fuel storage that the system can’t measure; I remember reading about an employee in New Orleans during Hurricane Katrina who stayed behind to keep a datacenter functioning, mostly by hauling drums of diesel up the stairs to the generator. But that’s not really a fundamental issue with those battery backup systems, not unless someone is planning on hauling in more batteries.

    If one gets a UPS and then backs it with a battery backup system, then there are two sets of batteries (one often lead-acid, with a shorter lifespan) and multiple inverters and battery charge controllers layered through the system. That’s not the end of the world, more of a “throw some extra money at it” issue, but one ends up buying redundant hardware.


  • One thing I’ve always found funny, though, is that if we have AIs that can replace programmers, then don’t we also, by definition, have AIs that can create AIs?

    Well, first, I wouldn’t say that existing generative AIs can replace a programmer (or even do that great a job of assisting one and increasing productivity). I do think that there’s potentially an unexplored role for an LLM-based “grammar checker” for code, which might be a bigger win for the kind of debugging work that would normally require a human.
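
    Something like this minimal sketch is roughly what I have in mind, assuming a local OpenAI-compatible chat endpoint (llama.cpp’s server and Ollama both expose one); the URL, model name, and prompt are just placeholders, not any existing tool:

    ```python
    # Hypothetical sketch only: ask a local LLM to act as a "grammar checker"
    # for code, flagging likely bugs rather than writing code. The endpoint URL
    # and model name below are placeholder assumptions.
    import requests

    ENDPOINT = "http://localhost:8080/v1/chat/completions"  # placeholder

    def review(code: str) -> str:
        resp = requests.post(
            ENDPOINT,
            json={
                "model": "local-model",  # placeholder model name
                "messages": [
                    {"role": "system",
                     "content": "You are a code reviewer. List only likely bugs."},
                    {"role": "user", "content": code},
                ],
            },
            timeout=120,
        )
        resp.raise_for_status()
        # Standard OpenAI-style chat completion response shape.
        return resp.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        # Deliberately buggy sample: off-by-one in the denominator.
        print(review("def mean(xs):\n    return sum(xs) / (len(xs) - 1)\n"))
    ```

    The idea being: no code generation at all, just flagging suspicious spots for a human to look at.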

    But, okay, set that aside – let’s imagine that we have an AI in 2025 that can serve as a drop-in replacement for a programmer, one that can translate plain English instructions into a computer program as well as a programmer could. That still doesn’t get us to the technological singularity, because that probably also involves doing a lot of research work. Like, you can find plenty of programmers who can write software…but so far, none of them have made a self-improving AGI. :-)


  • tal@lemmy.today (OP) to Selfhosted@lemmy.world · What are people doing for home server UPS in 2025?

    I’ll add one other point that might affect people running low-power servers, which I believe some people here are using for low-compute-load stuff like home automation: my past experience is that low-end, low-power computers often have inexpensive power supplies that are especially intolerant of wall-power issues. I have had multiple consumer broadband routers and switches get into a wonky, manual-reboot-requiring state after brownouts or power loss, even when other computers in the house continued to function without issue. I’d guess that those might be particularly sensitive to a longer delay in switching over to a backup power source, and that Raspberry Pi-class machines might have similarly vulnerable power supplies. I suppose that for devices with standard barrel connectors and voltage levels, one could probably find a more-expensive power supply that can handle dirtier power.

    If you run some form of backup power system that powers them, have you had issues with Raspberry Pis or consumer internet routers after power outages?




  • It concludes that “estimates about the magnitude of labor market impacts (by AI) may be well above what might actually materialize.”

    I can believe that in the short term. Especially if someone is raising money for Product X, they have a strong incentive to say “oh, yeah, we can totally have a product that’s a drop-in replacement for Job Y in 2-3 years”.

    So, they’re highlighting something like this:

    A 2024 study by the Indian Institute of Management, Ahmedabad, on labor force perception of AI (“IIMA Study”) states that 68% of the surveyed white-collar employees expect AI to partially or fully automate their jobs in the next five years.

    I think it’s fair to say that people are very probably over-predicting the generalized capabilities of existing systems based on seeing those systems work well in very limited roles, and probably also under-predicting the hurdles we’re going to crash into that we don’t yet know about.

    But over the long term, I’m much more skeptical that the impact is being overestimated. Those systems are probably going to be considerably more-sophisticated and may work rather differently than the current generative AI things. Think about how transformative industrialization was, when we moved to having machines fueled by fossil fuels do a lot of the labor that previously had to be done manually by humans. The vast majority of things that people were doing pre-industrialization aren’t done by people anymore.

    https://en.wikipedia.org/wiki/History_of_agriculture_in_the_United_States

    In Colonial America, agriculture was the primary livelihood for 90% of the population

    https://www.agriculturelore.com/what-percentage-of-americans-work-in-agriculture/

    The number of Americans employed in agriculture has been declining for many years. In 1900, 41% of the workforce was employed in agriculture. In 2012, that number had fallen to just 1%.

    Basically, the jobs that 90% of the population had were in some way replaced.

    That being said, I also think that if you have AI that can do human-level tasks across the board, it’s going to change society a great deal. I think that the things to think about are probably broader than just employment; like, I’d be thinking about things like major shifts in how society is structured, or dramatic changes in the military balance of power. Hell, just take the earlier example: if you were talking to someone in 1776 about how the US would change by the time it reached 2025, and they got tunnel vision and focused on the fact that about 90% of jobs would be replaced in that period, you’d probably say that that’s a relatively small facet of the changes that happened. The way people live, what they do, how society is structured, all of that is quite different from the way it had been for the preceding ~12k years, the structures that human society had developed since agriculture was introduced.




  • If you look at the article, it was only ever possible to do local processing with certain devices and only in English. I assume that those are the ones with enough compute capacity to do local processing, which probably made them cost more, and that the hardware probably isn’t capable of running whatever models Amazon’s running remotely.

    I think that there’s a broader problem than Amazon and voice recognition for people who want self-hosted stuff. That is, throwing loads of parallel hardware at something isn’t cheap. It’s worse if you stick it on every device. Companies — even aside from not wanting someone to pirate their model running on the device — are going to have a hard time selling devices with big, costly, power-hungry parallel compute processors.

    What they can take advantage of is that for a lot of tasks, the compute demand is only intermittent. So if you buy a parallel compute card, the cost can be spread over many users.

    I have a fancy GPU that I got to run LLM stuff; it ran about $1000. Say I’m doing AI image generation with it 3% of the time. If that compute ran on a shared system off in the Internet instead, my share of the hardware cost would be about $30. That’s a heckofa big improvement.

    And the situation that they’re dealing with is even larger, since there might be multiple devices in a household that want to do parallel-compute-requiring tasks. So now you’re talking about maybe $1k in hardware for each of them, not to mention the supporting hardware like a beefy power supply.

    This isn’t specific to Amazon. Like, this is true of all devices that want to take advantage of heavyweight parallel compute.

    I think that one thing that might be worth considering for the self-hosted world is the creation of a hardened parallel compute node that exposes its services over the network. In a scenario like that, you would have one device (well, or more, but it could just be one) that provides generic parallel compute services. Then your smaller, weaker, lower-power devices (phones, Alexa-type speakers, whatever) make use of it over your network, using a generic API. There are some issues that come with this. It needs to be hardened, and can’t leak information from one device to another. Some tasks require storing a lot of state: AI image generation, for example, requires loading a large model, and you want to cache that. If you have, say, two parallel compute cards or servers, you want to use them intelligently, keeping the model loaded on one of them insofar as is reasonable, to avoid needing to reload it. Some tasks, like voice recognition, are very latency-sensitive, while others, like image generation, are amenable to batch use, so some kind of priority system is probably warranted. So there are some technical problems to solve.
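
    To make that concrete, here’s a rough sketch of the kind of generic API I’m imagining, written as a single Python/Flask service on the box with the GPU. None of this is an existing project; the endpoint names, the latency_sensitive flag, the in-memory model “cache”, and the fake compute step are all placeholder assumptions:

    ```python
    # Hypothetical "household parallel compute node": the one box with the GPU
    # exposes a generic service, and low-power clients (speakers, phones, etc.)
    # call it over the LAN. Endpoint names and fields are illustrative only;
    # a real version would also need authentication and per-client isolation.
    import queue
    import threading
    import uuid

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Latency-sensitive jobs (e.g. voice recognition) jump ahead of batch jobs
    # (e.g. image generation): lower number = served first.
    jobs = queue.PriorityQueue()
    results = {}

    # Cache of "loaded" models so a large model isn't reloaded per request.
    loaded_models = {}

    def load_model(name):
        if name not in loaded_models:
            loaded_models[name] = f"(weights for {name})"  # stand-in for a real load
        return loaded_models[name]

    def worker():
        while True:
            _prio, job_id, body = jobs.get()
            load_model(body.get("model", "default"))
            # Stand-in "compute"; a real node would run the GPU task here.
            results[job_id] = {"task": body.get("task"), "echo": body.get("input")}
            jobs.task_done()

    @app.route("/v1/submit", methods=["POST"])
    def submit():
        body = request.get_json()
        prio = 0 if body.get("latency_sensitive") else 10
        job_id = str(uuid.uuid4())
        jobs.put((prio, job_id, body))
        return jsonify({"job_id": job_id})

    @app.route("/v1/result/<job_id>")
    def result(job_id):
        if job_id in results:
            return jsonify({"done": True, "result": results.pop(job_id)})
        return jsonify({"done": False})

    if __name__ == "__main__":
        threading.Thread(target=worker, daemon=True).start()
        app.run(host="0.0.0.0", port=8080)
    ```

    A voice-recognition client would submit with latency_sensitive set and poll /v1/result right away; an image-generation client would submit a batch job and come back for the result later.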

    But otherwise, the only real option for heavy parallel compute is going to be sending your data out to the cloud. And even if you don’t care about the privacy implications or the possibility of a company going under, as I saw some home automation person once point out, you don’t want your light switches to stop working just because your Internet connection is out.

    Having per-household self-hosted parallel compute on one node is still probably more-costly than sharing parallel compute among users. But it’s cheaper than putting parallel compute on every device.

    Linux has some mechanisms for running code in highly-isolated environments, like seccomp, that might be appropriate for implementing the compute portion of such a server, though I don’t know whether they’re too restrictive to permit running parallel compute tasks.

    In such a scenario, you’d have a “household parallel compute server”, in much the way that one might have a “household music player” hooked up to a house-wide speaker system running something like mpd or a “household media server” providing storage of media, or suchlike.



  • The nerds lost the internet.

    I mean, there wasn’t a shift in control or anything. This is just part of the business plan.

    Reddit, like many B2C online services, intentionally operated at a loss for years in order to grow.

    1. Get capital.

    2. Spend capital providing a service that is as appealing as possible, even if you have to lose money to do it. This builds your userbase. This is especially important with services that experience network effect, like social media, since the value of the network rises with the square of the number of users. This is the “growth phase” of the company.

    3. At some point, either capital becomes unavailable or too expensive (e.g. in the interest rate hikes after COVID-19), or you saturate available markets. At that point, you shift into the “monetization phase” – you have to generate a return using that userbase you built. Could be ads, charging for the service or some premium features, harvesting data, whatever. Because interest rates shot up after COVID-19, a lot of Internet service companies were forced to rapidly transition into their monetization phase at the same time. But the point is, your concern then isn’t growing the service so much as making a return, and it’s virtually certain that in some way, the service will become less-desirable, since it shifts to prioritizing making a return over being desirable enough to draw new users. That transition from growth to monetization phase is what Cory Doctorow called “enshittification”, though some people around here kind of misuse the term to refer to any change that they don’t like.

    Investors were not going to simply shovel money into Reddit forever with no return — they always did so expecting some kind of return, even if it took a long time to build to it. I hoped that the changes they made when they moved into a monetization phase would be ones that I could live with. In the end, they weren’t — I wasn’t willing to give up third-party clients if there was an alternative. But it’s possible that they could have come up with some sort of monetization that I was okay with.

    If you don’t mean the transition from growth to monetization at Reddit, but the creation of Reddit at all…

    The main predecessor to Reddit was, I suppose, Usenet. That was a federated system, and while it wasn’t grown with that kind of business model, it wasn’t free — but typically it was a service that was bundled into the bill when one got service from an ISP, along with email and sometimes a small amount of webhosting. Over time, ISPs that provided bundled Usenet service stopped providing it (since it increased their subscription fees and made their actual Internet service uncompetitive for people who didn’t use Usenet service), and because so many people used it to pirate large binaries, the costs of running a full-feed Usenet server increased. Users today that use Usenet typically pay a subscription to some commercial service. You can still get Usenet service now, if that’s what you want – but you’ll pay for it a la carte rather than having it bundled, and the last time I was trying to use it for actual discussion, it had real problems with spam.