• 0 Posts
  • 164 Comments
Joined 2 years ago
Cake day: June 16th, 2023



  • I could see someone being frustrated that, to a third party, it looks like you are not responding to a reply, and that person could spin that as a concession that they were right.

    I could see a compromise where a direct reply from such a blocked/muted person is allowed, but flagged so that people are aware a response could not have been made.


  • I’m not a huge fan of anyone meeting this sort of end.

    Mixed feelings, though, because on the other hand I have a bit of an appreciation for the context: a man who openly and consistently declared gun deaths somewhat acceptable getting killed by a gun.

    But ultimately, I would have rather seen him get his ass kicked, or take a few handgun bullets to the vest, to give him some appropriate fear and consequences without becoming a martyr. Ideally I would have liked him to just get scared out of actively trying to troll people the way he did. Further, I would have liked it to be clear from the very first moment that it was MAGA infighting, to avoid the incident deepening an already strained division, and maybe to show the movement the dangerous game it is playing.





  • If, hypothetically, the code had the same efficacy and quality as human code, then it would be much cheaper and faster. Even if it were actually a little bit worse, it would still be amazingly useful.

    My dishwasher sometimes doesn’t fully clean everything; it’s not as strong a guarantee as doing it myself. I still use it because, despite the lower-quality wash that requires some spot washing, I still come out ahead.

    Now, this was hypothetical: LLM-generated code is damn near useless for my usage, despite assumptions that it would do a bit more. But if it did generate code that matched the request, with a risk of bugs comparable to doing it myself, I’d absolutely be using it. I suppose with the caveat that I have to consider whether the code is within my ability to actually diagnose problems, too…


  • Based on my experience, I’m skeptical that people who seemingly delegate their reasoning to an LLM were really good engineers in the first place.

    Whenever I’ve tried, it’s been so useless that I can’t really develop a reflex, since it would have to actually help for me to get used to just letting it do its thing.

    Meanwhile, the very bullish people who are ostensibly the good engineers I’ve worked with are the ones who became pet engineers of executives, and have long succeeded by sounding smart to those executives rather than doing anything or even providing concrete technical leadership. They are more like having something akin to Gartner on staff, except without even the data that Gartner at least actually gathers, even though Gartner is a useless entity with respect to actual guidance.


  • Feel like I’m being gaslit: the more I try to use them, the less confident I become in their utility.

    I will confess it did help me cut through some particularly obtuse documentation to provide a rough example of what I wanted to do. It still totally screwed up the actual suggestion, but it at least helped me figure out some good keywords to dig into.

    It occasionally saves me some effort when I have to do something mindlessly tedious, but doing that usually also inflicts constant misguesses about what comes next.

    But even when doing easy stuff, they are constantly falling over, and that hasn’t been significantly improving.





  • jj4211@lemmy.world to Technology@lemmy.world · What If There’s No AGI? · 14 days ago

    Pretty much this. LLMs came out of left field, going from nothing to what they are now really quickly.

    I’d expect the same of AGI: not correlated with who spent the most or who is best at LLMs. It might happen decades from now or in the next couple of months. It’s a breakthrough that is just going to come out of left field when it happens.






  • Well, the article is covering the disclaimer, which is vague enough to mean pretty much anything.

    I can buy that he is taking it to the level of: if it can’t directly be used for the stuff in the disclaimer, well, what could it be used for then? Crafting formulas seems to be a possibility, especially since the spreadsheet formula language is kind of esoteric and clumsy to read and write. It ‘should’ be right up an LLM’s alley: a relatively limited grammar that’s kind of a pain for a human to work with, but easy enough for an LLM to get right in theory. An LLM is sometimes useful for scripting/programming, but the vocabulary and complexity can easily get away from it; Excel formulas are less likely to have programming-level complexity or arbitrarily many methods to invoke. You of course have to eyeball the formula to see if it looks right, and if it does screw up the cell parameters, that might be a hard thing for most people to catch by eyeballing.
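
    As a purely hypothetical illustration of the “hard to catch by eyeballing” problem: a one-row slip in a range still looks perfectly plausible. A quick regex sketch (not any real tool, and the formula below is made up) can at least list the ranges a suggested formula touches, so a human can compare them against what they intended:

    ```python
    import re

    # Hypothetical sketch: pull A1-style ranges out of a formula string so a
    # human can compare them against the rows/columns they meant to cover.
    RANGE_RE = re.compile(r"\$?([A-Z]{1,3})\$?(\d+):\$?([A-Z]{1,3})\$?(\d+)")

    def ranges_in(formula: str):
        """Return (start_col, start_row, end_col, end_row) for each range found."""
        return [(c1, int(r1), c2, int(r2))
                for c1, r1, c2, r2 in RANGE_RE.findall(formula)]

    # A made-up LLM-suggested formula that looks right at a glance...
    suggested = '=SUMIF(A2:A99,"widget",B2:B100)'
    for start_col, start_row, end_col, end_row in ranges_in(suggested):
        print(f"{start_col}{start_row} -> {end_col}{end_row}")
    # ...but the two ranges cover different row counts (98 vs 99): exactly the
    # kind of off-by-one that is easy to miss when eyeballing cell parameters.
    ```
    
    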