• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: June 22nd, 2023

  • This doesn’t make any more sense than Windows phones did. They required way too many hardware resources and too much power to run a system designed to do a ton of things on a ton of different types of hardware. Handheld hardware needs a specialized OS optimized for the platform, and I doubt this will do that. It will likely have a ton of RAM and processing tied up in OS activities, just like Windows phones, making everything slow and/or killing battery life, while still not being able to run a lot of the stuff that would make this all worth it. Better to start with a more modular system, like the base Linux kernel, and add only what is necessary than to start with the idea of supporting a ton of software and sacrifice the real purpose of the device (handheld gaming) to do it.



  • There are two ways to process voice: on the device or on a server. On-device solutions either are very basic and just detect differences between words, need training data based on your voice, or need lots of processing power for more generalized voice recognition. So is your battery draining and your phone often hot because an app is keeping the mic on and keeping the phone from sleeping? The other option is to stream the data to a server. This also increases battery usage, since the phone can’t sleep, but it might not be as noticeable; more evident would be your phone using a lot more bandwidth than is reasonable while you aren’t actively using it.





  • One of the primary requirements for my latest project, moving a bunch of stuff to self-hosted, is that if it has a GUI that is going to be internet facing, it either has to support OIDC or it has to be something low-risk enough that I feel comfortable setting it up without much security and just putting a single basic-auth login in front of it with traefik. A few apps I had trouble finding, but I worked most of it out.
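
    For the low-risk bucket, the traefik piece can look roughly like this. This is a minimal sketch of a dynamic-config file; the router, service, and middleware names, the host, the backend URL, and the password hash are all placeholders, not anything from a real setup:

```yaml
# Hypothetical traefik dynamic configuration; all names are made up.
http:
  middlewares:
    simple-auth:
      basicAuth:
        users:
          # htpasswd-style entry; generate one with: htpasswd -nB someuser
          - "someuser:$2y$05$replace-with-a-real-bcrypt-hash"
  routers:
    notes:
      rule: "Host(`notes.internal`)"
      middlewares:
        - simple-auth
      service: notes-svc
  services:
    notes-svc:
      loadBalancer:
        servers:
          - url: "http://192.168.1.50:8080"
```

    Anything behind OIDC obviously doesn’t need this; it’s just for the handful of apps where a single shared login is good enough.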


  • It’s just how HR does things in the US. Most applications have to go through an automated filtering system before reaching a person, unless it’s a pretty small company. That system usually requires very specific criteria to get through. I remember applying for a seasonal job at Target around the end of 2010, when I was laid off, and having to fill out a really detailed application online and take a bunch of personality tests. Turns out I scored too high on leadership and had too much professional experience to be a stocker/cashier, so I was rejected before my application was ever sent to the store manager.

    It’s not an accident or an unintended-consequence kind of thing either. It’s how they can have a job position “open” with hundreds of applications, but still be understaffed and thus force workers to do what should be extra people’s jobs for no extra pay. It’s just how mega-corp culture is in the US for the most part.

    As for software and some other very technical industries, it’s a similar cultural thing, but on top of that, most recruiters are not technically literate and so don’t know how to judge a technical person, yet are made to filter applications before passing them on. My last job had a position open the entire 10 years I worked there, and there were no interviews at the hiring-manager or team level in all that time. It was an analyst position, and I would have hired basically anyone who had the one bit of specialized knowledge if it had been up to me. But I did the job of two people the whole 10 years and was never able to move up in the company because of it.

    The only reason I didn’t leave sooner was that I didn’t have the funds to get a degree when I was younger and fell into a time when the crazy unsecured loans weren’t as much of a thing, and most companies filter out software-related candidates without a degree up front, regardless of experience. I finally got a degree when I found a program that I could handle while also doing two people’s worth of work.


  • If it’s just one job post, then automating it is not going to be very useful, and I don’t think OP meant that. It seemed like they want to give a general CV/resume, then feed it each job posting and get customized versions for each one. Many HR departments have keyword filters an application has to clear before it gets to a person. Otherwise, it takes only a few minutes to customize one application, and it would be much better to do it manually anyway.
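
    The keyword-filter part can be pictured with a toy sketch like this. Real ATS filters are proprietary, so the scoring rule, keyword list, and cutoff here are all made up for illustration:

```python
# Toy sketch of an ATS-style keyword filter. Real systems are proprietary;
# the keywords and the 0.75 cutoff below are invented for illustration.
def keyword_score(resume_text: str, required_keywords: list[str]) -> float:
    """Return the fraction of required keywords found in the resume text."""
    text = resume_text.lower()
    hits = sum(1 for kw in required_keywords if kw.lower() in text)
    return hits / len(required_keywords)

resume = "Senior Python developer with Docker and Kubernetes experience."
keywords = ["python", "docker", "kubernetes", "terraform"]

score = keyword_score(resume, keywords)   # 3 of 4 keywords -> 0.75
passes = score >= 0.75                    # hypothetical cutoff before a human sees it
```

    Which is why tailoring each submission to echo the posting’s exact wording matters more than it reasonably should.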

    The problem is, these days it usually takes 50-100 job applications per interview, depending on the industry. In the software industry (in the US anyway), that’s about average. My last job search took about 500 applications, which led to 3 third-round interviews, and 2 of those gave offers. In total I probably had around 8-10 first-round interviews, not including the many 5-10 minute phone calls with headhunter recruiters who contacted me based just on my resume on LinkedIn and various other sites.



  • Not exactly. I just think trying to apply a single-threaded, cyclical processing model to a process that is neither threaded nor executed in measurable cycles is nonsensical. On a very, very abstract level, it’s similar to dividing a pie between a group of people. If you think in terms of the object you give each person needing to be something recognizable as pie, then maybe a 9-inch pie can be divided 20 or 30 times. But if you stop thinking about the pie, and start looking at what the pie is made of, you can divide it so many times that it’s unthinkable. I mean, sure, there’s a limit. At some point there’s got to be some three-dimensional particle of matter that can no longer be divided, but it just doesn’t make sense to use the same scale or call it the same thing.

    Anyway, I’m not upset about it. It’s just dumb. And thinking about it is valuable, because companies are constantly trying to assign a monetary value to a human brain so they can decide when they can replace it with a computer. But we offer much different value: true creativity and randomness, pattern recognition, and true multitasking, versus fast remixing of predefined blocks of information and raw, linear calculation speed. There can be no fair comparison between a brain and a computer, and there are different uses for both. And the “intelligence” in modern “AI” is not the same as human intelligence, and likely never will be with digital computers.


  • Regardless of how you define a “bit”, saying 10 per second when most people easily process hundreds of pieces of information in every perceivable moment, much less every second, is still ridiculous. I was only using characters because that was one of the ridiculous things the article mentioned.

    Heck, just writing this message I’m processing the words I’m writing, listening to and retaining bits of information from what’s on the TV, being annoyed at the fact that I have the flu and my nose, ears, throat, and several other parts are achy in addition to the headache, noticing the discomfort of the way my butt is sitting on the couch but not wanting to move because my wife is also sick and lying in my lap, keeping myself from shaking my foot, because it is calming but will annoy said wife, etc. All of that data is being processed and reevaluated consciously in every moment, all at once. And that’s not including the more subconscious stuff that I could pay attention to if I wanted to, like breathing.


  • I just skimmed it, but it’s starting with a totally nonsensical basis for calculation. For example,

    “In fact, the entropy of English is only ∼ 1 bit per character.”

    Um, so each character is just 0 or 1, meaning there are only two characters in the English language? You can’t reduce it like that.

    I mean, just the headline is nonsensical. 10 bits per second? A second is a really long time. So even under their hypothesis that a single character is a bit, we can only consider 10 characters in a second? I can read a whole sentence of more than ten words, much less characters, in a second, while also retaining what music I was listening to, what color the page was, how hot it was in the room, how itchy my clothes were, and how thirsty I was during that second, if I pay attention to all of those things.

    This is all nonsense.
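
    For what it’s worth, a naive per-character entropy estimate is easy to compute. This quick unigram sketch (not the paper’s method, which presumably models longer-range structure; the sample sentence is just an example) comes out around 4 bits per character for English text:

```python
# Rough sketch: empirical character entropy of a short English sample,
# using single-character (unigram) frequencies only. Estimates like this
# typically land near 4 bits/character; the ~1 bit/character figure comes
# from models that exploit longer-range context.
import math
from collections import Counter

def char_entropy(text: str) -> float:
    """Shannon entropy in bits per character, from unigram frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog"
h = char_entropy(sample)  # roughly 4 bits per character
```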


  • No, it’s the cost. Reprocessing wouldn’t create weapons-grade materials in most cases, not any more than the enrichment for the existing reactors anyway. The problem is that it requires expensive equipment and lots of security, and doesn’t produce nearly as much energy as the existing reactors, at least not in the short term, and companies (especially publicly traded ones) only really have incentive to care about short-term profit.

    Then you have the problem of limited supply in a given area; if you need to get it from all over the world, the transportation is definitely a security issue and a major expense. And once you reprocess all of the existing waste, it takes time for more to be produced, and then you aren’t making a profit.

    It’s just not a profitable undertaking, so it will never happen. The general conceptual technology has existed at least as long as nuclear reactors have, but it hasn’t been developed at all. That’s the reality and will remain the reality, especially considering that other, truly renewable energy sources are cheaper to build and don’t require as much security and maintenance to produce as much energy.

    The biggest thing that would solve a lot of problems in renewables would be investing in batteries and other efficient energy storage. But the fossil-fuel companies own most of that tech now, have traditionally shelved it after buying it, and, with the current political atmosphere, are being incentivized to dig more aggressively for fossil fuels rather than plan for the future. Especially in the US, with the next administration planning to increase oil and coal production and eliminate the environmental restrictions that make it more expensive to dig up, process, and use what little remains.


  • The waste. There are currently no operational long-term storage facilities, much less permanent ones. It’s too expensive, so companies just go bankrupt, or governments like the US just stop funding them, and the waste sits in pools waiting for a natural disaster, terrorist attack, or war to damage them and poison the soil and water tables for generations. The Pacific Ocean already got a taste with Fukushima, but it’s enormous and could mostly absorb it. What if a tornado hit a facility in the landlocked Midwest US?



  • It’s good to use SSL even if you don’t plan to use it externally. At some point you may change your mind, or you may need to access it via VPN, and there may be one hop between your browser and the VPN that would then be in plain text. Plus, not all devices are trustworthy anymore. An Android or iPhone device might have “malware” (including from reputable companies like Google trying to track you for ad purposes while recording unsecured HTTP traffic to do it). Or a friend may bring a bad device over, connect to your wifi, and inadvertently capture that traffic. There are lots of ways for internal traffic to be spied on.

    Google: “how to create self signed certificate authority on <your workstation OS>”

    And if that article doesn’t have it, google: “how to create a domain certificate from a self signed certificate authority”.

    It doesn’t have to be a valid external domain; just use “.internal” as the top-level domain, which is reserved for this kind of thing, like “vaultwarden.internal”. You can also just use IP addresses in the certificate, but I find that less desirable.

    Then google: “how to add a trusted certificate authority on <all the OSes of your internal devices>”. Depending on what web browser you use, you may need to add it there as well. Once the certificate authority is trusted by your devices and browsers, the domain certificates created by that CA will be too.
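
    The creation steps above can be sketched with openssl roughly like this (a minimal sketch; the file names, subject names, and validity periods are just examples to adapt):

```shell
# 1) Create a private certificate authority: a key and a self-signed CA cert.
openssl genrsa -out myca.key 4096
openssl req -x509 -new -key myca.key -sha256 -days 3650 \
  -subj "/CN=My Home Lab CA" -out myca.crt

# 2) Create a key and a certificate signing request for the internal domain.
openssl genrsa -out vaultwarden.key 2048
openssl req -new -key vaultwarden.key \
  -subj "/CN=vaultwarden.internal" -out vaultwarden.csr

# 3) Sign the request with the CA, adding the subjectAltName browsers check.
printf "subjectAltName=DNS:vaultwarden.internal\n" > san.ext
openssl x509 -req -in vaultwarden.csr -CA myca.crt -CAkey myca.key \
  -CAcreateserial -days 825 -sha256 -extfile san.ext -out vaultwarden.crt
```

    Then myca.crt is what you import as a trusted CA on each device, and vaultwarden.key/vaultwarden.crt go to the web server or reverse proxy.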

    You can set your expiration dates far in the future if you want, to avoid having to create new certificates often, but be sure to document the process so that in 5 or 10 years, if it’s still set up that way, you’ll know how to renew them.