• 0 Posts
  • 492 Comments
Joined 1 year ago
Cake day: July 15th, 2024



  • Unless general-purpose Internet services get banned.

    Have you noticed that laws about providing data on those bad-bad criminals, yadda-yadda, implicitly ban everything that doesn’t allow obtaining that data?

    And have you noticed that Russian bans of things that can’t be filtered not only implicitly amount to a whitelist ban on protocols and services, but are also not very far from what EU and US lawmakers want?

    In any situation, if what powerful guys are doing seems to promise good things for you, first consider the possibility that you don’t understand what they are doing.


  • Gemini is just a Web replacement protocol, with the basic things we remember from the olden-days Web but with everything non-essential removed, so that a client is doable in a couple of days. I have my own Gemini viewer, LOL.

    This seems to me a completely different application from torrents.

    I was dreaming of a thing similar to torrent trackers, but for aggregating storage, computation, indexing, and search: the services’ responses (search, aggregation, and so on) being structured and standardized; cryptographic identities; and some kind of market services to buy and sell storage and computation in a unified, pooled, yet transparent way (scripted by buyer/seller), similar to MMORPG markets.

    The representation (what is a siloed service on the modern Web) would live in a native client application, and those services would allow building any kind of huge client-server system on top of them, globally. So it’s more of a global Facebook/Usenet/whatever, a killer of platforms. Their infrastructure is internal while their representation is public on the Internet; I want to make the infrastructure public on the Internet and the representation client-side, shared by many kinds of applications. Adding another layer to the OSI model, so to say, between the transport and application layers.

    For this application:

    I think you could have some kind of Kademlia-based p2p with voluntarily joined groups (including very huge ones) where nodes store replicas of partitions of the group’s common data, based on their pseudo-random identifiers and/or some kind of ring built from those identifiers, to balance storage and resilience. If a group has a creator, then the replication factor can be propagated signed by them, and membership too.

    But if having a creator (even with cryptographically delegated decisions) who propagates changes is not OK, then maybe just using the hash of the whole data, or its BitTorrent-like info-tree hash, as a namespace that peers freely join will do.

    Then it may be better to partition not by parts of the whole piece but by the info tree? I guess making it exactly BitTorrent-like is not a good idea; rather some kind of block tree, like in a filesystem, plus a separate piece of information to look up which file lives in which blocks, if we are doing a directory structure.

    Then, with peers freely joining, there’s no need for any owners or replication factors; I guess a pseudo-random distribution of hashes will do, with each node storing first the partitions closest to its own hash.
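    A minimal sketch of that “closest to its hash” assignment, assuming 160-bit SHA-1 identifiers and a hypothetical replication factor of 3; Kademlia uses XOR distance over node IDs in the same way:

```python
# Hypothetical sketch: assign each block to the nodes whose pseudo-random
# identifiers are XOR-closest to the block's hash (Kademlia-style distance).
import hashlib


def ident(name: bytes) -> int:
    """Derive a 160-bit pseudo-random identifier from a name or content."""
    return int.from_bytes(hashlib.sha1(name).digest(), "big")


def assign_block(block_hash: int, nodes: dict[str, int], replicas: int = 3) -> list[str]:
    """Return the `replicas` node names whose IDs are XOR-closest to the block."""
    return sorted(nodes, key=lambda n: nodes[n] ^ block_hash)[:replicas]


# Ten example nodes and one example block; names are placeholders.
nodes = {f"node{i}": ident(f"node{i}".encode()) for i in range(10)}
block = ident(b"some-block-content")
print(assign_block(block, nodes))
```

    Because every node can compute the same ranking independently, no coordinator is needed: a joining or leaving node shifts only the blocks nearest to it in the ID space.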

    Now that I think about it, such a system wouldn’t be that different from BitTorrent and could even be interoperable with it.

    There’s the issue of updates, yes; hence I started with groups having a hierarchy of creators who can make or accept those updates. With that, plus the ability to gradually copy one group’s data into another group, it should be possible to fork a certain state. But that line of thought makes reusing BitTorrent possible for only part of the system.

    The whole database is guaranteed to be bigger than a normal HDD (1 TB? I dunno). Absolutely guaranteed, no doubt at all. 1 TB (for example) would be someone’s collection of favorite stuff, and not a particularly rich one.


  • I blame the 00s and 10s idea that there should be some “Zen” in computer UIs, and that “Zen” means doing things wrong with the arrogant tone of “you just don’t understand it”. Associated with Steve Jobs, but TBH with Google as well.

    And also the other idea: “you dummy talking about ergonomics can’t be smarter than this big respectable corporation popping out stylish unusable bullshit”.

    So -

    1. a pretense of wisdom and taste, under which crowd fashion is masked;
    2. an almost aggressive preference for authority over people who might actually have some wisdom and taste from being interested in the subject;
    3. blind trust in whatever tech authority you chose for yourself, because, if you remember, in the 00s anyone working on anything connected to computers was still perceived as being as cool as an aerospace or naval engineer, some kind of elite, including those making user applications;
    4. an objective flaw (or upside) of the old normal UIs: they are boring. That’s why UIs in video games and fashionable chat applications (like ICQ and Skype), not to mention video and audio players, were always non-standard. I think the solution would be per-application theming, not breaking paradigms; as with ICQ, old Skype, and video games, I prefer boredom to be fought with different applications having different icons and colors while the UI paradigm stays the same. There was a LOTR-themed IE which I used (OK, not really, I used Opera) alongside ICQ, QuickTime player, and BitComet; all of them had a standard paradigm and a non-standard look.

  • It’s not a glitch.

    People have spent billions to build systems where such dissemination of crowd emotion is the main difference from the real Web (what was on GeoCities or even LJ; a bit of that survives in Telegram, because it’s a Russian honeypot for collecting intelligence, and Russia cares a bit less about keeping up the line than American social media corps do, in its effort to make the honeypot actually attractive to use).

    Then spent billions to advertise them. Billions to kill competition.

    Then they’ve lost billions from that, and yet doubled down on it.

    That just doesn’t happen by accident, it’s a whole era of humanity’s history now. Like 20s-50s (the “bad” kind of change, with goosestepping, cult of strong people, attempts to save empires, preparations for a nuclear war, all that) and 60s-90s (the “good” kind of change, with space race, hippies in the west, Soviet official ideology being peace and unification of humanity - BTW, it’s funny how the western politicians of that time freeloaded on that, never denying such a goal, but also never accepting it, thus getting the good parts without the hard ones) and then what we have.


  • Well. Not very different from “opening up” to hashish fumes or Tarot cards or Chinese fortune cookies.

    And robotic therapists are a common enough component of classical science fiction, not even all dystopian.

    For the record, I agree that the results suck. Everything around us is falling apart, have you noticed?

    You can do more with less at a 1% deadly error rate, and you can do much more with much less at a 10% deadly error rate. Military and economic logic says that the latter wins. Which means the latter wins evolution.

    And we (that is, our parents and grandparents) have built a nice world intended for low error rates, because they didn’t think such a contradiction between efficiency and correctness would happen, or they thought it’s our job to root out our own time’s weeds, loosely quoting Tolkien, and they rooted out theirs as well as they could.

    Which means that nice world doesn’t survive evolution.


  • It’s fascinating to see this find new pastures in the New World. Speaking as a proud Russian citizen.

    Some day you’ll remember with nostalgia those years when the ruling party actually cared to win elections.

    Jokes aside, it’s easier to cheat now because it’s easier to do everything, and that’s because of the Internet and modern computing systems.

    You can’t unmince minced meat back.

    But you can apply the same change in a different direction and see that today direct non-anonymous democracy is actually plausible, if it’s instituted, for big countries. 100 years ago it simply wasn’t possible. Now it is.

    Or that today a Soviet system (as in Soviet democracy, not totalitarian state capitalism) is actually possible to build. When they were trying, they couldn’t; they didn’t possess the means.

    And that both these things are actually what these people have done to us, but inverted. Our “direct vote” is the data they collect about us to classify and predict us for control. Our “Soviets” are that classification, and our “central planning” is those predictions and control.

    They’ve done all this, just directed for their own interest. So maybe one can do the opposite.



  • Confirmed, damn it, by the Telegram protocol itself.

    And, um, the fact that the siloviki have access to correspondence has already come up in a bunch of criminal cases. They don’t hide it much. And those, apparently, weren’t local trojans.

    That is, most likely server-side access to what’s stored there.

    WhatsApp, by contrast, is actually E2EE, and official access there is only to metadata.



  • And some super advanced LLM-powered text compression so you can easily store a copy of 20% of them on your PC to share P2P.

    Nothing can be that advanced, and zstd is good enough.

    The idea is cool: pure p2p exchange as a fallback, and something like BitTorrent trackers as the main center, yielding nodes per space (suppose there’s more than one such archive you’d want to replicate) and per partition (if an archive is too big that might make sense, but then some of what I write further down should be reconsidered).

    The problem with torrents and similar systems is that people only store what’s interesting to them.

    If you have to store one humongous archive, be able to search it efficiently, and avoid losing pieces, then, I think, you need a partitioned, roughly equal distribution of it over nodes.

    The space of keys (suppose it’s hashes of blocks of the whole) is partitioned by prefix so that a node stores an equal number of blocks from every prefix. And of those, the values closest to the node’s identifier (a bit like in Kademlia) should be stored first. OK, thinking about it, the first sentence of this paragraph might even be unneeded.
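    A sketch of the prefix-partitioning idea, under the assumption that keys are SHA-256 hashes of blocks and that nodes agree on an interleaved split within each prefix bucket (the function names here are made up for illustration):

```python
# Hypothetical sketch: bucket keys by hash prefix, then give each node an
# equal interleaved share of every bucket, so storage stays balanced.
import hashlib


def prefix_bucket(key: bytes, prefix_bits: int = 8) -> int:
    """Bucket a block key by the first `prefix_bits` bits of its SHA-256 hash."""
    h = hashlib.sha256(key).digest()
    return int.from_bytes(h[:2], "big") >> (16 - prefix_bits)


def shard_for_node(keys, node_idx: int, n_nodes: int, prefix_bits: int = 8):
    """Within each prefix bucket, take every n_nodes-th key starting at
    node_idx, so every node holds a roughly equal slice of every bucket."""
    buckets: dict[int, list] = {}
    for k in sorted(keys):
        buckets.setdefault(prefix_bucket(k, prefix_bits), []).append(k)
    shard = []
    for b in sorted(buckets):
        shard.extend(buckets[b][node_idx::n_nodes])
    return shard


keys = [f"block{i}".encode() for i in range(100)]
shards = [shard_for_node(keys, i, 4) for i in range(4)]
print([len(s) for s in shards])
```

    The union of all shards covers every key exactly once, and no node’s shard is concentrated in one region of the key space.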

    The data itself should probably be in some format where you don’t need all of it to decompress only the small part you need: just the beginning with the dictionary, plus the interval in question.

    There should also be, as a separate piece of functionality of this system, keyword search inside intervals, so that a search yields the intervals where a certain keyword is encountered, with nodes indexing the continuous intervals they can decompress and responding to search requests for those keywords. Ideally a single block should be decompressible given just the dictionary. I suppose I should do my reading on compression algorithms and formats.
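    The keyword-to-interval mapping described above is essentially an inverted index; a minimal sketch, with made-up interval contents and naive whitespace tokenization:

```python
# Minimal inverted index: map each keyword to the set of interval IDs
# (continuous decompressible ranges) where it occurs.
from collections import defaultdict


def build_index(intervals: dict[int, str]) -> dict[str, set[int]]:
    """Index every whitespace-separated, lowercased word by interval ID."""
    index: dict[str, set[int]] = defaultdict(set)
    for interval_id, text in intervals.items():
        for word in text.lower().split():
            index[word].add(interval_id)
    return index


intervals = {
    0: "Kademlia routing tables",
    1: "BitTorrent piece exchange",
    2: "routing in overlay networks",
}
index = build_index(intervals)
print(sorted(index["routing"]))  # [0, 2]
```

    A node would answer a query with the matching interval IDs, and the client would then fetch and decompress only those intervals.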

    The search function could probably also return Google-like context snippets, depending on the space needed.

    We would also need some way to reward contribution, that is, to pay a node owner for storing and serving blocks.





  • HTML 2.0 doesn’t have tables, and tables are not so bad, even org-mode has tables.

    Since HTML 4.01 was already a thing when I first saw a website:

    Being able to have buttons is good. Buttons with pictures too.

    And, unlike some people, I liked the idea of framesets. A simple enough website could have an IRC-like chat frame on the left and the main navigable area on the right.

    And the unholy number of specific tags is the other side of the coin of not yet using JS and CSS for everything.

    I think an “RHTML” standard as a continuation and maybe simplification of HTML 4.01 (no JS, no CSS; do dynamic things in applets, and instead of Netscape plugins do applets with some new kind of plugin running in a specialized sandboxed VM with JIT) could be useful. Other than that there’s no need for any change at all. It’s perfect. It has all the necessary things for hypertext.




  • I dreamed of a moment when the further existence of society without clear and unambiguous personal responsibility would become impossible.

    This is that moment. In olden days, even if an apparatus made a decision, it still consisted of people. Now it’s possible for the mechanism to involve no people at all, despite making garbage decisions. That’s something new, or, to be more precise, something forgotten too long ago: from the times of fortunetelling on birds’ intestines and lambs’ bones for strategic decisions. I suppose in those times such fortunetelling was a mechanism for making a decision random enough, thus avoiding dangerous predictability and traitors affecting decisions.

    The problem with AI or “AI” is that it’s not logically the same as that fortunetelling.

    And also, about personal responsibility: in ancient Greece (and Rome) an unfortunate result of such decision-making was not blamed on the gods. It was blamed on the leader (their lack of favor with the gods), or maybe on the fortuneteller for failing to interpret the gods’ will, which, in cases where they could affect the result, is fair. Or sometimes on the whole unit, the whole army, or the whole city-state. So the main trait of any human mechanism, the existence of a responsible party, was present.

    Either a clearly identifiable set of people in the company providing the program, or the operator, or the officer making the decision, or all of them, should be responsible when an “AI” is used, to at least match this old way.