  • For the blockchain technology at the very core foundation of cryptocurrencies, it’s a reasonable concept that solves a specific challenge (ie no one can change this value unless they have the cryptographic key), and the notion of an indelible or tamper-evident ledger is useful in other fields (eg certificate revocation lists). Using a blockchain as a component is – like all of engineering – about picking the right tool for the job, so I wouldn’t say that having or not having a blockchain, by itself, makes a system good or bad.

    One step above the base technology is the actual application as currency, meaning a representation of economic value, either to store that value (eg gold) or for active trade (eg the €2 coin). All systems of currency require: 1) recognition and general consensus as to their value, 2) fungibility (ie this $1 note is no different than your $1 note), and 3) the ability to functionally transfer the currency.

    Against those criteria, cryptocurrencies have questionable value, as seen in how volatile the cryptocurrency-to-fiat markets are. Observe that the USD or Euro or RMB are used to pay people’s salaries, to denominate home mortgage loans, to buy and sell crude oil, and so on. Yet basically no one uses cryptocurrency for those tasks, no one writes or accepts business-to-business contracts denominated in cryptocurrency, and only a small handful of sovereign states accept cryptocurrency as valid payment. That’s… not a great outlook for circulating the currency.

    But for fungibility, cryptocurrency clearly meets that test, and probably exceeds the fiat currencies: there’s no such thing as a “torn” Bitcoin note. There are no forgeries of Ethereum. It is demonstrable that a unit of cryptocurrency that came from blood-diamond profits is indistinguishable from a unit that was earned as wages at a fuel station in Kentucky. There are no “marked notes” or “ink packs” when committing cryptocurrency theft, and it’s relatively easy to launder cryptocurrency through thousands of shell accounts/addresses. Laundering physical money a thousand times over is practically impossible, and doing the same with digitized fiat transfers would be far too suspicious.

    And that brings us to the ability to actually transfer cryptocurrency. While it’s true that it should only take an extra ledger entry to move funds from one address/account to another, each system has costs buried somewhere. Bitcoin users have to pay per-transaction fees, and currencies pegged to other assets have to “execute” a “smart contract”, with attendant verification costs such as proof-of-work or proof-of-stake. These costs simply don’t exist when I hand a $20 note to a fuel station clerk, or when my employer sends my wages via ACH electronic payment.

    Observe how cryptocurrency is traded not at shops with goods (eg Walmart) or shops for currency (eg the bureau de change at the airport) but mostly only through specialized ATMs or online exchange websites. The few people who genuinely do use their cryptocurrency wallets to engage in transactions are now well in the minority, overshadowed by scammers, confidence/romance tricksters, investment funds with no idea of what they’re doing except to try riding the bandwagon, and individuals who have never traded financial instruments but were convinced by “their buddy’s friend” that cryptocurrency was a money-making machine.

    To that end, I would say that cryptocurrencies have brought out the worst of financial manipulators, and their allure is creating serious financial perils for everyday people, whether directly as a not-casino casino or to pay a ransomware extortion, or indirectly through the destabilization of the financial system. No one is immune to a breakdown of the financial system, as we all saw in 2008.

    I used to enjoy discussing the technical merits of ledger-based systems with people, but with the awful repercussions of what they’ve enabled, it’s a struggle to have a coherent conversation without someone suggesting a cryptocurrency use-case. And so I kinda have to throw the whole baby out with the bathwater. Maybe when things quiet down in a few decades, the technology can be revisited from a sober perspective.


  • If I understand the Encryption Markdown page correctly, it appears the public/private keys are primarily there to protect the data at rest? But then both keys are stored on the server, albeit protected by the passphrase for the keys.

    So if the protection boils down to the passphrase, what is the point of having the user upload their own keypair? Are the notes ever exported from the instance while still being encrypted by the user’s keypair?

    Also, why PGP? PGP may be readily available, but it’s definitely not an example of user-friendliness, as evidenced by its lack of broad adoption outside tech and government circles.

    And then, why RSA? Or are other key algorithms supported as well, like ed25519?


  • Directly answering the question: no, not every country has such a consolidated library that enumerates all the laws of that country. And for reasons I’ll get into, I suspect no such library could ever exist in any real-life country.

    I do like this question, and it warrants further discussion about laws (and rules, and norms), how they’re enacted and enforced, and how different jurisdictions apply the procedural machine that is their body of law.

    To start, I will be writing from a California/USA perspective, with side-quests into general Anglo-American concepts. That said, the continental European system of civil law also provides good contrast for how similar yet different the “law” can be. Going further abroad will yield even more distinctions, but I only have so much space in a Lemmy comment.

    The first question to examine is: what is the point of having laws? Some valid (and often overlapping) answers:

    • Laws describe what is/isn’t acceptable to a society, reflecting its moral ideals
    • Laws incentivize or punish certain activities, in pursuit of public policy
    • Laws set the terms for how individuals interact with each other, whether in trade or in personal life
    • Laws establish a procedural machine, so that by turning the crank, the same answer comes out consistently

    From these various intentions, we might be inclined to think that “the law” should be some sort of all-encompassing tome that specifies every aspect of human life, not unlike an ISO standard. But that is only one possible way to meet the goals of “the law”. If instead we had a book of “principles”, and those principles were the law, then applying those principles to scenarios would yield similar results. That said, exactly how a principle like “do no harm” applies to “whether pineapple belongs on pizza” is not as clear-cut as one might want “the law” to be. Indeed, it is precisely the intersection of all these objectives that makes “the law” so complicated. And that’s even before we look at unwritten laws.

    The next question would be: are all laws written down? In the 21st Century, in most jurisdictions, the grand majority of new laws are recorded as written statutes. But just because it’s written down doesn’t mean it’s very specific. This is the same issue from earlier with having “principles” as law: what exactly does the USA Constitution’s First Amendment mean by “respecting an establishment of religion”, to use one example? But by not micromanaging every single detail of daily life, a document that starts with principles and is then refined by statute law is going to be a lot more flexible over the centuries. For better or worse, the USA Constitution encodes mostly principles and some hard rules, but otherwise leaves a lot of the details for Congress to fill in.

    Flexibility is sometimes a benefit for a system of law, although it also opens the door to abuse. For example, I recall a case from the UK many years ago, where crown prosecutors in London had a tough time finding which laws could be used to prosecute a cyclist who injured a pedestrian. As it turned out, because of the way that vehicular laws were passed in the 20th Century, all the laws on “road injuries” basically required the use of an automobile, and so there was a hole in the law when it came to charging bicyclists. They ended up charging the cyclist with the criminal offense of “furious driving”, from an 1860s statute that criminalized operating on the public road with “fury” (aka intense anger).

    One could say that the law was abused, because such an old statute shouldn’t be used to apply to modern-day circumstances. That said, the bicycle was invented in the 1820s or 1830s. But one could also say that having a catch-all law is important to make sure the law doesn’t have any holes.

    Returning to American law, it’s important to note that when there is non-specific law, it is up to the legislative body to fill those gaps. But for the same flexibility reasons, Congress or the state or tribal legislatures might want to confer some flexibility on how certain laws are applied. They can imbue “discretion” upon an agency (eg USA Department of Commerce) or to a court (eg Superior Court of California). At other times, they write the law so that “good judgement” must be exercised.

    As those terms are used, discretion more-or-less means having a free choice, where either option is acceptable so long as it stays within reasonable guidelines. Whereas “good judgement” means the guidelines are enforced and there’s much less wiggle-room for arbitrariness. And confusingly, sometimes there’s both a component of discretion and a component of judgement, which usually means Congress really didn’t know what else to write.

    Some examples: a District Attorney anywhere in California has discretion when it comes to filing criminal charges. They could outright choose to not prosecute person A for bank robbery, but proceed with prosecuting person B for bank robbery, even though they were working together on the same robbery. As an elected official, the DA is supposed to weigh the prospects of actually obtaining a guilty verdict, as well as whether such prosecution would be beneficial to the public or a good use of the DA office’s limited time and budget. Is it a bad look when a DA prosecutes one person but not another? Yes. Are there any guardrails? Yes: a DA cannot abuse their discretion by considering disallowed factors, such as a person’s race or other immutable characteristics. But otherwise, the DA has broad discretion, and ultimately it’s the voters that hold the DA to account.

    Another example: the USA Environmental Protection Agency’s Administrator is authorized by the federal Clean Air Act to grant the state of California a waiver from the supremacy of federal automobile emissions laws. That is to say, federal law on automobile emissions is normally the law of the land, and no US State is allowed to write its own laws on automobile emissions. However, because of the smog crisis of the ’70s and ’80s, the feds considered California a special basket-case that needed its own, more stringent emissions laws. Thus, California must seek a waiver from the EPA to write these more stringent laws, because the blanket rule is “no state can write such laws”. The federal Clean Air Act explicitly says only California can have this waiver, that it must be renewed regularly by the EPA, and that California cannot dip below the federal standards. The final requirement is that the EPA Administrator shall issue the waiver if California requests it and qualifies for it.

    This means the EPA Administrator does not have discretion, but rather is exercising good judgement: does California’s waiver application satisfy the requirements outlined in the Clean Air Act? If so, the Administrator must issue the waiver. There is no allowance for an “I don’t wanna” reason to withhold it. The Administrator could only refuse by showing that California is somehow trying to do an end-run around the EPA, such as by trying to reduce the standards.

    The third question is: do laws encompass all aspects of everything? No, laws are only what is legally enforced. There are also rules/by-laws and norms. A rule or by-law is often enforced by a body outside the legal system’s purview. For example, the penalty for violating a by-law of the homeowner’s association might be revocation of access to the common spaces. For a DnD group, the ultimate penalty for violating a rule might be expulsion.

    Meanwhile, norms are things that people generally agree on, but which were felt to be so commonplace that nobody wrote them down; even so, breaking a norm can make everything else nonfunctional. For example, there’s a norm that one does not write an online comment in all-caps, except to represent emphasis or yelling. One could violate that norm with no formal repercussions, but everyone else would dislike you for it: they might not want to engage further with you, they might not give you any benefit of the doubt, they may make adverse inferences about you IRL, or other things.

    TL;DR: there are unwritten principles that form part of the law, and there’s no way to record all the different non-law rules and social norms that might apply to any particular situation.



  • One way to make this more Pythonic – and less C or POSIX-oriented – is to use the pathlib library for all filesystem operations. For example, while you could open the file in a context manager, pathlib makes it really easy to read a file:

    from pathlib import Path
    ...
    
    config = Path("/some/file/here.conf").read_text()
    

    This automatically opens the file (which checks for existence), reads out the entire file as a string (rather than bytes, but there’s a method for that too), and then closes the file. If any of those steps go awry, you get a Python exception and a traceback explaining exactly what happened.
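
    For instance, here’s a minimal sketch (reusing the same placeholder path as above) of catching one of those exceptions, plus the bytes-oriented counterpart:

    from pathlib import Path
    
    path = Path("/some/file/here.conf")  # placeholder path from above
    
    try:
        config = path.read_text(encoding="utf-8")
    except FileNotFoundError:
        config = ""  # eg fall back to defaults when the file is absent
    
    # read_bytes() is the raw counterpart for non-text content
    raw = path.read_bytes() if path.exists() else b""
    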


  • To many of life’s either-or questions, we often struggle when the answer is: yes. That is to say, two things can hold true at the same time: 1) LLMs can result in job redundancies, and 2) LLMs hallucinate results.

    But if we just stopped the analysis there, we wouldn’t have learned anything. To use this reality to terminate any additional critical thinking is, IMO, wholly inappropriate for solving modern challenges, and so we must look into the exact contours of how true these statements are.

    To wit, LLM-induced job redundancies could come from skills which have been displaced by the things LLMs can do well. For example, typists lost their jobs when businesspeople were expected to operate a typewriter on their own. And when word processing software came into existence for the personal computer, a lot of typewriter companies folded or were consolidated. In the case of LLMs, consider that people do use them to proofread letters for spelling and grammar.

    Technologically, we’ve had spell-check software for a while, but grammar was harder. In turn, an industry appeared somewhere in the late 2000s or early 2010s to develop grammar software. Imagine how the software devs at these companies (eg Grammarly) might be in a precarious situation, if an LLM can do the same work. At least with grammar checking, even the best grammar software still struggles with some of the more esoteric English sentence constructions, so if an LLM isn’t 100% perfect, that’s still acceptable. I can absolutely see the fortunes of grammar software companies suffering due to LLMs, and that means those software devs are indeed threatened by what LLMs can do.

    For the second statement, it is trivial to find examples of LLMs hallucinating, sometimes spectacularly or seemingly ironically (although an LLM would be hard-pressed to simulate the intention of irony, I would think). In some fields, such hallucinations are career-limiting moves for the user, such as if an LLM was used to advise on pharmaceutical dosage, or used to draft a bogus legal appeal that leaves the judge unamused. This is very much a FAFO situation, where somehow the AI/LLM companies are burdened with none of the risk and all of the upside. It’s like how autonomous-driving automotive companies are somehow allowed to do public road tests of their beta-quality designs, yet the liability for crashes still befalls the poor sod seated behind the wheel. Those companies just keep yapping about how those crashes are all “human error” and how “an autonomous car is still safer”.

    But I digress.

    My point is that LLMs have quite a lot of capabilities, and people make a serious mistake when they assume that competence (or incompetence) in one capacity carries over to another. This is not unlike how humans assess other humans, such as assuming a record-setting F1 driver would probably be a very good chauffeur for a limousine company. But whereas humans have patterns that suggest they might be good (or bad) at something, LLMs are a creature unlike anything else.

    I personally am not bullish on further LLM improvements, and I think the next big push will require additional academic research that is nowhere near commercialization. But even I have to recognize that some very specific tasks are handled decently by today’s available LLMs. I just don’t think that’s good enough for me to consider using them, given their subscription costs, the risk of becoming dependent, and how niche those tasks are.


  • Using an MSP430 microcontroller, I once wrote an assembly routine that (ab)used its SPI peripheral in order to stream a bit pattern from memory out to a GPIO pin, at full CPU clock rate, which would light up a “pixel” – or blacken it – in an analog video signal. This was for a project that superimposed an OSD onto the video feed of a dashcam, so that pertinent vehicle data would be indelibly recorded along with the video. It was for one heck of a university project car.

    To do this, I had to study the MSP430 instruction timings, which revealed that a byte could be loaded from SRAM into the SPI output register, a counter incremented, and a comparison made against a limit value in a tight loop, all within exactly 8 CPU cycles. The SPI completes an 8-bit transfer every 8 SPI clock cycles, and the CPU and SPI blocks can share the same clock source. In this way, I could prepare a “frame buffer” of bits to write to the screen – plenty of time during the vertical blanking interval of analog video – and then blast it atop the video signal.
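
    In Python terms (purely illustrative – the real routine was hand-written MSP430 assembly), packing a scanline of OSD pixels into SPI-ready bytes amounts to something like this:

    def pack_scanline(pixels: list[bool]) -> bytes:
        """Pack pixels MSB-first so each byte shifts out 1 bit per SPI clock."""
        assert len(pixels) % 8 == 0, "pad the line to a whole number of bytes"
        out = bytearray()
        for i in range(0, len(pixels), 8):
            byte = 0
            for bit in pixels[i:i + 8]:
                byte = (byte << 1) | int(bit)  # MSB ends up transmitted first
            out.append(byte)
        return bytes(out)
    
    # 8 white "pixels" followed by 8 black ones -> b'\xff\x00'
    line = pack_scanline([True] * 8 + [False] * 8)
    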

    I think I ended up running it at 8 MHz, which gave me sufficient pixel resolution on a 480i analog video signal. Also related was the task of creating a set of typefaces which would be legible on-screen but also be efficient to store in the MSP430’s limited SRAM and EEPROM memories. My job was basically done when someone else was able to use printf() and it actually displayed text over the video.

    This MSP430 did not have a DMA engine, and even if it did, few engines permit an N-to-1 transaction to write directly to the SPI output register. Toggling the GPIO register directly was out of the question, due to taking multiple clock cycles to toggle a single bit and load the next value. Whereas my solution was a sustained 1 bit per clock cycle at 8 MHz. All interrupts disabled too, except for the vertical and horizontal blanking intervals, which basically dictated the “thinking time” available for the CPU.


  • For a single password, it is indeed illogical to distribute it to others, in order to prevent it from being stolen and misused.

    That said, the concept of distributing authority amongst several people is quite sound. Instead of each owner having the whole secret, each holds only a portion of it, and a quorum of owners must agree in order to combine their parts and use the secret. Rather than passwords, this is typically used for cryptographically signing off on something’s authenticity (eg software updates), where it’s known as threshold signatures:

    Imagine for a moment, instead of having 1 secret key, you have 7 secret keys, of which 4 are required to cooperate in the FROST protocol to produce a signature for a given message. You can replace these numbers with some integer t (instead of 4) out of n (instead of 7).

    This signature is valid for a single public key.

    If fewer than t participants are dishonest, the entire protocol is secure.
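
    To make the t-of-n idea concrete, here’s a minimal Python sketch of Shamir secret sharing, the classic threshold scheme. (FROST builds on the same quorum idea but produces signatures without ever reassembling the key, so treat this as an illustration of thresholds, not of FROST itself.)

    import secrets
    
    PRIME = 2**127 - 1  # a prime comfortably larger than any demo secret
    
    def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
        # the secret is the constant term of a random degree-(t-1) polynomial
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
        def f(x: int) -> int:
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]
    
    def reconstruct(shares: list[tuple[int, int]]) -> int:
        # Lagrange interpolation at x = 0 recovers the constant term
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
        return total
    
    shares = make_shares(secret=123456789, t=4, n=7)
    assert reconstruct(shares[:4]) == 123456789  # any 4 of the 7 suffice
    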


  • Related to moderation are notions of procedural fairness, including: 1) the idea that rules should be applied to all users equally, 2) that rules should not favor certain users or content, and 3) that there exists a process to seek redress, to list a few examples. These are laudable goals, but I posit that they can never be 100% realized on an online platform, not for small-scale Lemmy instances nor for the largest of social media platforms.

    The first idea is demonstrably incompatible with the requisite avoidance of becoming a Nazi bar. Nazis and adjoining quislings cannot be accommodated, unless the desire is to become the next Gab. Rejecting Nazis necessarily treats them differently from other users, but it keeps the platform alive and healthy.

    The second idea isn’t compatible with why most people set up instances or join a social media platform. Fediverse instances exist either as an extension of a single person (self-hosting for just themselves) or to promote some subset of communities (eg a Minnesota-specific instance). Meanwhile, large platforms like Meta exist to make money from ads. Naturally, they favor anything that gets more clicks (eg clickbait) over adorable cat videos that make zero revenue.

    The third idea would be feasible, except that it is a massive attack vector: unlike an in-person complaints desk, even the largest companies cannot staff – if they even wanted to – enough customer service personnel to deal with a 24/7 barrage of malicious, auto-generated campaigns that flood them with invalid complaints. Whereas such a denial-of-service attack against a real-life complaints desk would be relatively easy to manage.

    So once again, social media platforms – and each Fediverse instance is its own small platform – have to make some choices based on practicalities, their values, and their objectives. Anyone who says it should be easy has not looked into it enough.


  • Reddit has global scope, and so their moderation decisions are necessarily geared towards being legally and morally acceptable in as many places as possible. Here is Mike Masnick on exactly what challenges any new social media platform faces, including some that Lemmy et al may have to face in due course: https://www.techdirt.com/2022/11/02/hey-elon-let-me-help-you-speed-run-the-content-moderation-learning-curve/ . Note: Masnick is on the board of BlueSky, since it was his paper Protocols, Not Platforms that inspired BlueSky. But compared to the Fediverse, BlueSky has not achieved the same level of decentralization yet, having prioritized scale. Every social media network chooses its tradeoffs; it’s part of the bargain.

    The good news is that the Fediverse avoids any of the problems related to trying to please advertisers. The bad news is that users still do not voluntarily go to “the Nazi bar” if they have any other equivalent option. Masnick has also written about that when dealing at scale. All Fediverse instances must still work to avoid inadvertently becoming the Nazi bar.

    But being small and avoiding scaling issues is not all roses for the Fediverse. Not scaling means fewer resources and fewer people to do moderation. Today, most instances range from individual passion projects to small collectives. The mods and admins are typically volunteers, not salaried staff. A few instances have companies backing them, but that doesn’t mean they’d commit resources as though it were crucial to business success. Thus, the challenge is to deliver the best value to users on a slim budget.

    Ideally, users will behave themselves on most days, but moderation is precisely required on the days they’re not behaving.


  • Used for AI, I agree that a faraway, loud, energy-hungry data center comes with a huge host of negatives for the locals, to the point that I’m not sure why they keep getting building approval.

    But my point is that in an eventual post-bubble world where AI has its market correction, there will be at least some salvage value in a building that already has power and data connections. A loud, energy-hungry data center can be tamed into a quiet, energy-sipping one depending on what hardware it’s filled with. Remove the GPUs and add some plain servers and that’s a run-of-the-mill data center, the likes of which have been neighbors to urbanites for decades.

    I suppose I’d rehash my opinion as such: building new data centers can be wasteful, but I think changing out the workload can do a lot to reduce the impacts (aka harm reduction), making it less like reopening a landfill, and more like rededicating a warehouse. If the building is already standing, there’s no point in tearing it down without cause. Worst case, it becomes climate-controlled paper document storage, which is the least impactful use-case I can imagine.



  • Absolutely, yes. I didn’t want to elongate my comment further, but one odd benefit of the Dot Com bubble collapsing was all of the dark fibre optic cable laid in the ground. Those would later be lit up, to provide additional bandwidth or private circuits, and some even became fibre to the home, since some municipalities ended up owning the fibre network.

    In a strange twist, a company that produced a lot of this fibre optic cable and nearly went under during the bubble pop – Corning – would later become instrumental in another boom, because their glass expertise meant they knew how to produce durable smartphone screens. They are the maker of Gorilla Glass.


  • I’m not going to come running to the defense of private equity (PE) firms, but compared to so-called AI companies, the PE firms are at least building tangible things that have an ostensible alternative use. A physical data center building – even one located far away from the typical metropolitan areas that have better connectivity to the world’s fibre networks – will still be an asset with some utility, when/if the AI bubble pops.

    In that scenario, the PE firm would certainly take a haircut on their investment, but they’d still get something because an already-built data center will sell for some non-zero price, with possible buyers being the conventional, non-AI companies that just happen to need some cheap rack space. Looking at the AI companies though, what assets do they have which carry some intrinsic value?

    It is often said that during the California Gold Rush, the richest people were not those who staked out the best gold mining sites, but those who sold pickaxes to miners – at least until gold fever gave way to the sober realization that it was overhyped. So too would PE firms pivot to whatever comes next, selling their remaining interest from the prior hype cycle and moving on to the next.

    I’ve opined before that because no one knows when the bubble will burst, it is simultaneously financially dangerous to: 1) invest into that market segment, but also 2) to exit from that market segment. And so if a PE firm has already bet most of the farm, then they might just have to follow through with it and pray for the best.


  • I presume we’re talking about superconductors; I don’t know what a supra (?) conductor would be.

    There are two questions here: 1) how much superconducting material is required for today’s state-of-the-art quantum computers, and 2) how quantum computers would be commercialized. The first deals in material science and whether more-capable superconductors can be developed at scale, ideally at room temperature so they wouldn’t require liquid helium. Even a plentiful superconductor that merely requires liquid nitrogen would be a big improvement.

    But the second question is probably the limiting factor, because although quantum computers are billed as the next iteration of computing, the fact of the matter is that “classical” computers will still be able to do most workloads faster than quantum computers, today and well into the future.

    The reality is that quantum computers excel at only a specific subset of computational tasks, ones which classically might require mass parallelism. For example, brute-forcing encryption keys is one such task, but even applying Grover’s algorithm optimally, the speed-up is a square-root factor. That is to say, if a cryptographic algorithm would need 2^128 operations to brute-force on a classical computer, then an optimal quantum computer would only need 2^64 quantum operations. If quantum computers achieve the equivalent performance of today’s classical computers, then 2^64 is achievable, and that cryptographic algorithm is broken.

    If. And it’s kinda easy to see how to avoid this problem: use “bigger” cryptographic algorithms – doubling a symmetric key from 128 to 256 bits restores a 2^128 work factor even against a square-root speedup. So what would quantum computers be commercialized for? Quite frankly, I have no idea: until commonly-available quantum computers exist, and there is a workload which classical computers cannot reasonably do, there won’t be a market for quantum computers.
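
    A quick illustrative calculation of that work-factor arithmetic (assuming an idealized square-root speedup):

    def work_factor(key_bits: int, quantum: bool = False) -> int:
        # brute-force cost: 2^k classically, 2^(k/2) with Grover's algorithm
        return 2 ** (key_bits // 2 if quantum else key_bits)
    
    assert work_factor(128) == 2**128               # classical brute force
    assert work_factor(128, quantum=True) == 2**64  # broken, if feasible
    assert work_factor(256, quantum=True) == 2**128 # doubled key restores margin
    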

    If I had to guess, I imagine that graph theorists will like quantum computers, because graph problems can increase in complexity really fast on classical machines, but are more tame on quantum computers. But the only commercial applications from that would be social media (eg Facebook hires a lot of graph theorists) and surveillance (finding correlations in masses of data). Uh, those are not wide markets, although they would have deep pockets to pay for experimental quantum computers.

    So uh, not much that would benefit the average person.





  • As a practical matter, relative directions are already hard enough, where I might say that Colorado is east of California, and California is west of Colorado.

    To use +/- East would mean there’s now just a single symbol of difference between relative directions: California being -East of Colorado, and Colorado being +East of California.

    Also, we must not forget that the conventional prime meridian used for Earth navigation is centered on Greenwich in the UK, a holdover from the colonial era when Europe was put front-and-center on the map and everything else was “free real estate”. Perhaps if the New World didn’t exist, we would have a right-ascension-style system where Greenwich is still 0-deg East and Asia is almost 160-deg East. Why would colonialists center the maps on anywhere but themselves?
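
    As a toy illustration of the “+/- East” scheme (a hypothetical convention, with a roughly-placed example city):

    def signed_east(longitude_deg: float) -> float:
        """Wrap any longitude into (-180, 180], where negative means 'west'."""
        lon = longitude_deg % 360.0
        return lon - 360.0 if lon > 180.0 else lon
    
    print(signed_east(241.8))  # ~ -118.2: Los Angeles is "-East" of Greenwich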