
  • Tbf, can’t the other party mess it up with signal too?

    Yes, but this is where threat modeling comes into play. Grossly simplified, developing a threat model means assessing what sort of attackers you reasonably expect to make an attempt on you. For some people, the greatest concern is their conservative parents finding out that they’re on birth control. Others might be journalists trying to maintain the confidentiality of an informant against a rogue sheriff’s department in rural America. Yet others face the risk of a nation-state’s intelligence service trying to find their location while in exile.

    Each of these users has different potential attackers. Signal is well suited for the first two, but only alright against the third. After all, if the CIA or Mossad is following someone around IRL, there are other ways to crack their communications.

    What Signal specifically offers is confidentiality in transit, meaning that all ISPs, WiFi networks, CDNs, VPNs, script kiddies with Wireshark, and network admins in the path of a Signal convo cannot see the contents of those messages.

    Can the messages be captured at the endpoints? Yes! Someone could be standing right behind you, taking photos of your screen. Can the size or metadata of each message reveal the type of message (eg text, photo, video)? Yes, but that’s akin to feeling the shape of an envelope. Only through additional context can the contents be known (eg a parcel in the shape of a guitar case).

    Signal also benefits from the network effect, because someone trying to get away from an abusive SO has plausible deniability if they download Signal on their phone (“all my friends are on Signal” or “the doctor said it’s more secure than email”). Or a whistleblower can send a message to a journalist who included their Signal username in a printed newspaper. The best place to hide a tree is in a forest. We protect us.

    My main issue for signal is (mostly iPhone users) download it “just for protests” (ffs) and then delete it, but don’t relinquish their acct, so when I text them using signal it dies in limbo as they either deleted the app or never check it and don’t allow notifs

    Alas, this is an issue with all messaging apps, if people delete the app without closing their account. I’m not sure if there’s anything Signal can do about this, but the base guarantees still hold: either the message is securely delivered to their app, or it never gets seen. But the confidentiality should always be maintained.

    I’m glossing over a lot of cryptographic guarantees, but for one-to-one or small-group private messaging, Signal is the best mainstream app at the moment. Secure group messaging at scale, like organizing hundreds of people for a protest, is still up for grabs, because even if an app were 100% secure, any one of those people could leak the message to an attacker. More participants means more potential for leaks.



  • Let me make sure I understand everything correctly. You have an OpenWRT router which terminates a WireGuard tunnel, which your phone will connect to from somewhere on the Internet. When the WireGuard tunnel lands within the router in the new subnet 192.168.2.0/24, you have iptables rules (sketched below, after the list) that will:

    • Reject all packets on the INPUT chain (from subnet to OpenWRT)
    • Reject all packets on the OUTPUT chain (from OpenWRT to subnet)
    • Route packets from phone to service on TCP port 8080, on the FORWARD chain
    • Allow established connections, on the FORWARD chain
    • Reject all other packets on the FORWARD chain
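
    If it helps to compare notes, here’s roughly how I’d expect those rules to look in raw iptables. The interface names (wg0 for the tunnel, br-lan for the LAN) and the scoping of the final reject are my assumptions, so treat this as a sketch rather than your actual config:

```
# Sketch of the ruleset as described; interface names are placeholders.
iptables -A INPUT  -i wg0 -j REJECT     # tunnel subnet -> OpenWRT itself: reject
iptables -A OUTPUT -o wg0 -j REJECT     # OpenWRT itself -> tunnel subnet: reject
# Phone to the service on TCP 8080 (must come before the catch-all reject)
iptables -A FORWARD -i wg0 -p tcp --dport 8080 -j ACCEPT
# Return traffic for established connections
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Everything else arriving from the tunnel: reject
iptables -A FORWARD -i wg0 -j REJECT
```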

    So far, this seems alright. But where does the service run? Is it on your LAN subnet or the isolated 192.168.2.0/24 subnet? The diagram you included suggests that the service runs on an existing machine on your LAN, so that would imply that the router must also do address translation from the isolated subnet to your LAN subnet.
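
    If the service does stay on the LAN and you go the address-translation route, that step would be something like the following (again using the interface names I assumed above):

```
# Hypothetical: source-NAT traffic from the isolated tunnel subnet as it is
# forwarded onto the LAN, so the service's replies route back via the router.
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o br-lan -j MASQUERADE
```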

    That’s doable, but ideally the service would be homed onto the isolated subnet. But perhaps I misunderstood part of the configuration.


  • litchralee@sh.itjust.works to Selfhosted@lemmy.world · Self hosting Signal server · 2 months ago (edited)

    This doesn’t answer OP’s question, but is more of a PSA for anyone that seeks to self-host the backend of an E2EE messaging app: only proceed if you’re willing and able to uphold your end of the bargain to your users. In the case of Signal, the server cannot decrypt messages when they’re relayed. But this doesn’t mean we can totally ignore where the server is physically located, nor how users connect to it.

    As Soatok rightly wrote, the legal jurisdiction of the Signal servers is almost entirely irrelevant when the security model is premised on cryptographic keys that only the end devices have. But also:

    They [attackers] can surely learn metadata (message length, if padding isn’t used; time of transmission; sender/recipients). Metadata resistance isn’t a goal of any of the mainstream private messaging solutions, and generally means building atop the Tor network. This is why the threat modeling discussed earlier is important.

    So if you’re going to be self-hosting from a country where superinjunctions exist or the right against unreasonable searches is being eroded, consider that well before an agent with a wiretap warrant demands that you attach a logger for “suspicious” IP addresses.

    If you do host your Signal server and it’s only accessible through Tor, this is certainly an improvement. But still, you must adequately inform your users about what they’re getting into, because even Tor is not fully resistant to deanonymization, and then by the very nature of using a non-standard Signal server, your users would be under immediate suspicion and subject to IRL side-channel attacks.

    I don’t disagree with the idea of wanting to self-host something which is presently centralized. But also recognize that the network effect with Signal is the same as with Tor: more people using it for mundane, everyday purposes provides “herd immunity” to the most vulnerable users. Best place to hide a tree is in a forest, after all.

    If you do proceed, don’t oversell what you cannot provide, and make sure your users are fully abreast of this arrangement and they fully consent. This is not targeted at OP, but anyone that hasn’t considered the things above needs to pause before proceeding.



  • A Nintendo Wii would also work, as exemplified by this blog running on a NetBSD Wii.

    But in all seriousness, the original comment has a point: using a mobile phone as a server is possible but also wastes a lot of the included hardware, like the cellular baseband, the touchscreen, and the voice and Bluetooth capabilities. Selling the phones and using the proceeds to purchase a used NUC or an SFF PC would give you more avenues to expand, in addition to being plain easier to set up, since it would have USB ports, among other luxuries.


  • From my limited experience with PoE switches, the power being drawn relative to how much the switch can supply has a notable impact on efficiency. Specifically, when only one or two ports on a 48-port switch are delivering PoE, the increased AC power drawn from the wall is disproportionately high. Hence, any setup where you’re using more of the PoE switch’s potential power tends to increase overall efficiency.

    My guess is that it has to do with efficiency curves that only look reasonable under the heavy loads enterprise customers run. In any case, if either of those two candidate switches meets your needs today with some breathing room, both should be fine. I would tend to lean towards Netgear before TP-Link, though, out of personal preference.


  • litchralee@sh.itjust.works to Selfhosted@lemmy.world · Wifi Portal · 5 months ago

    But how do they connect to your network in order to access this web app? If the WiFi network credentials are needed to access the network that has the QR code for the network credentials, this sounds like a Catch 22.

    Also, is a QR code useful if the web app is opened on the very phone needing the credentials? Perhaps other phones are different, but my smartphone cannot scan a QR code shown on its own display.



  • Typically, business-oriented vendors will list the hardware that they’ve thoroughly tested and will warranty for operation with their product. The lack of testing larger disk sizes does not necessarily mean anything larger than 1 TB is locked out or technically infeasible. It just means the vendor won’t offer to help if it doesn’t work.

    That said, in the enterprise storage space where disks are densely packed into disk shelves with monstrous SAS or NVMeoF configurations, vendor-specific drives are not unheard of. But if you owned hardware that even remotely had that sort of restriction, it would be readily apparent.

    To be clear, are you using the mobo’s built-in HBA, or adding a separate PCIe HBA that you already have? If the latter, I can’t see how the mobo could dictate what the HBA supports. And if it’s in IT mode, then the OS is mostly in control of addressing the drive.

    The short answer is: you’ll have to try it and find out. And when you do, let us know what you find!
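
    If it helps when you do test it, a couple of quick checks from the OS side would confirm whether the full capacity is visible (the device name below is a placeholder):

```
# Hypothetical sanity checks once the larger drive is attached.
lsblk -o NAME,SIZE,MODEL    # does the OS see the drive's full capacity?
smartctl -i /dev/sda        # SMART identity info, including user capacity
```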


  • Congrats on the acquisition!

    DL380 G9

    Does this machine have its iLO license? If so, you’re in for a treat, if you’ve never used IPMI or similar out-of-band server management. Starting as a glorified KVM, it then has full power control authority (power on/off, soft reset, hard reset), either a separate or shared Ethernet connection, virtual CD and USB, SNMP reporting, and other whiz-bang features. Used correctly, you might never have to physically touch the machine after installation, except for parts replacement.
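
    As a taste of what that looks like remotely, here are a few hypothetical ipmitool invocations; the address and credentials are placeholders, and iLO needs its IPMI-over-LAN option enabled for these to work:

```
# Out-of-band power control and console over the management network.
ipmitool -I lanplus -H 192.0.2.10 -U admin -P 'secret' chassis power status
ipmitool -I lanplus -H 192.0.2.10 -U admin -P 'secret' chassis power cycle
ipmitool -I lanplus -H 192.0.2.10 -U admin -P 'secret' sol activate   # serial-over-LAN
```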

    What is your go-to place to source drive caddies or additional bays if needed?

    When my Dell m1000e was missing two caddies, I thought about buying a few spares on eBay. But ultimately, I just 3d printed a few and that worked fine.

    Finally, server racks are absurdly expensive of course. Any suggestions on DIY’s for a rack would be appreciated.

    I built my rack using rails from Penn-Elcom, as I had a very narrow space I wanted to fit my machines. Building an open-frame 4-post rack is almost like putting a Lego set together, but you will have to take care to make sure it doesn’t become a parallelogram. That is, don’t impart a sideways load.

    Above all, resist the urge to get by with a two-post rack. This will almost certainly end in misery, considering that enterprise servers are not lightweight.


  • I agree with this comment, and would suggest going with the first solution (NAT loopback, aka NAT hairpin) rather than split-horizon DNS. I say this even though I have a strong dislike of NAT (and would prefer to see networks using flat IPv6 addresses, but that’s a different topic). It should also be fairly quick to configure the hairpin on your router.
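
    For reference, a hairpin in raw iptables terms looks roughly like the sketch below; the addresses and port are placeholders, and on OpenWRT or most consumer routers this is usually just a “NAT loopback”/reflection toggle rather than hand-written rules:

```
# Hypothetical hairpin NAT: LAN clients (192.168.1.0/24) hitting the public
# IP 203.0.113.10 on port 443 get redirected back to the internal server
# 192.168.1.50, and source-NATed so its replies return through the router.
iptables -t nat -A PREROUTING  -s 192.168.1.0/24 -d 203.0.113.10 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.50
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.50 -p tcp --dport 443 -j MASQUERADE
```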

    Specifically, problems arise with split-horizon DNS when the same hostname resolves to two different results depending on which DNS nameserver is used. This is distinct from some corporate-esque DNS nameservers that refuse to answer external requests but provide an answer to internal queries. Having no “single source of truth” (SSOT) for what a hostname should resolve to will inevitably make future debugging harder. And that’s on top of debugging NAT issues.

    Plus, DNS isn’t a security feature unto itself: successful resolution of internal hostnames shouldn’t increase security exposure, since a competent firewall would block access anyway. Some might suggest that DNS queries can reveal internal addresses to an attacker, but that’s the same faulty argument used to claim ICMP pings should be blocked; they shouldn’t be.

    To be clear, ad-blocking DNS servers don’t suffer from the ails of split-horizon described above, because they’re intentionally declining to give a DNS response for ad-hosting hostnames, rather than giving a different response. But even if they did, one could argue the point of ad-blocking is to block adware, so we don’t really care if SSOT is diminished for those hostnames.


  • which means DNS entries in a domain, and access from the internet

    The latter is not a requirement at all. Plenty of people have publicly-issued TLS certs for domain-named services that aren’t exposed to the public internet, or that aren’t using HTTP(S) at all. If using LetsEncrypt, the DNS-01 challenge method would suffice, and it can even issue a wildcard certificate covering subdomains, so no additional certificate issuances are required.

    After acquiring a domain, it can be pointed to one of many free nameservers that provide an API, which an ACME script can update for automatic renewal of the LetsEncrypt certificate using DNS-01. dns.he.net is one such example.
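
    As a rough sketch of that flow with certbot (the domain is a placeholder; with dns.he.net you’d typically script the TXT record update through their API in an auth hook, or use an ACME client that has a plugin for it):

```
# Hypothetical: request a wildcard cert via the DNS-01 challenge. certbot will
# prompt for a TXT record, which a --manual-auth-hook script could push to the
# nameserver's API instead of doing it by hand.
certbot certonly --manual --preferred-challenges dns \
  -d example.com -d '*.example.com'
```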

    OP has been given a variety of options, each of which come with their own tradeoffs. But public access to Jellyfin just to get a public cert is not a necessary tradeoff that OP needs to make.


    Not “insecure” in the sense that they’re shoddy with their encryption, no. But being free could mean their incentives are not necessarily aligned with those of the free users.

    In security speak, the CIA triad stands for Confidentiality, Integrity, and Availability. I’m not going to unduly impugn Proton VPN’s credentials on data confidentiality and data integrity, but availability can be a legit security concern.

    For example, if push comes to shove and Proton VPN is hit with a DDoS attack, would free tier users be the first to be disconnected to free up capacity? Alternatively, suppose the price for IP transit shoots through the roof due to weird global economics and ProtonVPN has to throttle the free tier to 10 Mbps. All VPN operators share these possibilities, but however well-meaning Proton VPN and the non-profit behind them are, economic factors can force changes that aren’t great for the free users.

    Now, the obv solution at such a time would be to then switch to being a paid customer. And that might be fine for lots of customers, if that ever comes to pass. But Murphy’s Law makes it a habit that this scenario would play out when users are least able to prepare for it, possibly leading to some amount of unavailability.

    So yes, a holistic analysis of failure points is precisely what proper security calls for. Proton VPN free tier may very well be inappropriate. But whether it rises to a serious concern or just warrants an “FYI”, that will vary based on individual circumstances.


  • Don’t. OP already said in the previous post that they only need Jellyfin access within their home. The Principle of Least Privilege tilts in favor of keeping Jellyfin off the public Internet. Even if Jellyfin were flawless – and no program is – the only benefit that accrues to OP is that the free tier of ProtonVPN can access Jellyfin.

    Opening a large attack surface for such a modest benefit is letting the tail wag the dog. It’s adding a kludge to work around a different kludge, the latter being ProtonVPN’s very weird paid tier.


  • I previously proffered some information in the first thread.

    But there’s something I wish to clarify about self-signed certificates, for the benefit of everyone. Irrespective of which certificate store an app uses – either its own or the one maintained by the OS – the CA/Browser Forum, which maintains the standards for public certificates, prohibits issuance of TLS certificates for reserved IPv4 or IPv6 addresses. See Section 4.2.2.

    This is because those addresses can refer to different machines on different networks. Whereas a certificate for a global-scope IP address is fine, because it should point to the same destination no matter where it’s reached from. If certificate authorities won’t issue certs for private IP addresses, there’s a good chance that apps won’t tolerate such certs either. Nor should they, for precisely the reason given above.

    A proper self-signed cert – either for a domain name or a global-scope IP address – does not create any MITM issues as long as the certificate was manually confirmed the first time and added to the trust store, either in-app or in the OS. Thereafter, only a bona fide MITM attack would raise an alarm, the same as if a MITM attacker tries to impersonate any other domain name. SSH is the most similar, where trust-on-first-connection is the norm, not the outlier.

    There are safe ways to use self-signed certificates. People should not discard that option so wantonly.
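
    For illustration, a minimal sketch of minting such a cert for an internal hostname (the name and validity period are placeholders, and -addext needs OpenSSL 1.1.1 or newer), which would then be added to the trust store of each client device or app:

```
# Hypothetical: self-signed cert for an internal domain name, with a matching
# SAN so modern clients accept it once it is in their trust store.
openssl req -x509 -newkey rsa:4096 -sha256 -days 730 -nodes \
  -keyout internal.key -out internal.crt \
  -subj "/CN=media.home.example.net" \
  -addext "subjectAltName=DNS:media.home.example.net"
```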


  • Physical wire tapping would be mostly mitigated by setting every port on the switch to be a physical vlan

    Can you clarify on this point? I’m not sure what a “physical VLAN” would be. Is that like only handling tagged traffic?

    I’m otherwise in total agreement that the threat model is certainly not typical. But I can imagine a scenario like a college dorm where the L2 network is owned by a university, and thus considered “hostile” to OP somehow. OP presented their requirements, so good advice has to at least try to come up with solutions within those parameters.


  • I had a small typo where “untrusted” was written as “I trusted”. That said, I think we’re suggesting different strategies to address OP’s quandary, and either (or both!) would be valid.

    My suggestion was for encrypted L3 tunneling between end-devices which are trusted, so that even an untrustworthy L2 network would present no issue. With technologies like WireGuard, this isn’t too hard to do for mobile phone clients, and it’s well supported for Linux clients.
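
    As a sketch of how light that can be on the client side (the keys, addresses, and endpoint below are placeholders, not anyone’s real values):

```
# Hypothetical minimal WireGuard client config for a trusted device.
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.net:51820
# Route only the tunnel subnet through the tunnel, not all traffic
AllowedIPs = 10.8.0.0/24
# Helps mobile clients keep NAT mappings alive
PersistentKeepalive = 25
EOF
wg-quick up wg0
```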

    If I understand your suggestion, it is to improve the LAN so that it can be trusted, by way of segmentation into VLANs which separate the trusted devices from the rest. The problem I see with this is that per-port VLANs alone do not address the possibility of physical wire-tapping, which I presumed was why OP does not trust their own LAN. Perhaps they’re running cable through a space shared with other tenants, or something like that. VLANs help, but MACsec encryption on the wire paired with 802.1x device certificate for authentication is the gold standard for L2 security.

    But seeing as that’s primarily the domain of enterprise switches, the L3 solution in software using WireGuard or other tunneling technologies seems more reasonable. That said, the principle of Defense In Depth means both should be considered.