• 0 Posts
  • 313 Comments
Joined 1 year ago
Cake day: January 29th, 2025

  • His actual opinion is that AI companies can use his code no problem - they just have to pay a fee.

    The problem is that the big LLM AI companies will just say… ‘Fuck off’, because they don’t like paying for any data, and they also think their models will soon be advanced enough to write their own libraries (if not now, depending on how much they believe their own marketing hype).

    Pricing is another unanswered problem in his new model. As a hypothetical: if 1000 traditional OSS users generate $1000 of value in conversions to paid users under his old model, what should an AI license cost? One license (eg to Anthropic/Claude) would theoretically replace millions of users, maybe 80%+ of his userbase. Would he ask for millions as a licensing fee?
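    Putting rough numbers on that hypothetical (every figure below is made up - it's just scaling the per-user value):

    ```python
    # Every number here is hypothetical, extending the example above.
    oss_users = 1000
    value_generated = 1000.0                      # $ from paid conversions, old model
    value_per_user = value_generated / oss_users  # $1 per user

    # Guess: one vendor's license effectively covers 80% of a 5M-strong userbase
    users_covered = 5_000_000 * 0.8
    license_fee = users_covered * value_per_user
    print(f"Equivalent license fee: ${license_fee:,.0f}")  # $4,000,000
    ```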

    Whole idea is half-baked IMO, but I am sympathetic to the bullshit situation he finds himself in.





  • Spotify streams all music at 160kbps OGG for free users by default, so that’s what this archive is dumped at - the original Spotify content, no transcode. The only difference is that songs with a ‘popularity’ of zero were re-encoded at a lower bitrate, because that saved an enormous amount of data for all the AI crap pumped into Spotify that nobody listens to.

    Side note - it would probably not be possible to do a dump as a paid user (they would notice the account being abused and ban it), but paid accounts go up to 320kbps OGG, and some content is also available lossless (as FLAC).

    Anyway, 99%+ of people can’t consistently tell the difference between a 160kbps OGG and lossless, because of limitations in their equipment, their ears, their training, or some combination thereof. This has been blind-tested many times, and the audiophiles who ‘swear they can tell’ are always proven wrong - they then usually blame the equipment or the test. There are tests you can run yourself too, eg here: https://abx.digitalfeed.net/list.html
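    If you’d rather test offline with your own music, here’s a minimal sketch of the same ABX idea - assuming ffmpeg/ffplay are installed, with ‘reference.flac’ as a placeholder for your own lossless file:

    ```python
    #!/usr/bin/env python3
    """Rough ABX sketch: lossless vs ~160kbps Vorbis.

    Assumes ffmpeg/ffplay are on PATH; 'reference.flac' is a placeholder
    for your own lossless file.
    """
    import random
    import subprocess

    REF = "reference.flac"   # placeholder: your lossless source
    LOSSY = "lossy.ogg"
    TRIALS = 16

    # Encode the lossy version once (libvorbis at ~160kbps, like Spotify's free tier)
    subprocess.run(["ffmpeg", "-y", "-loglevel", "quiet", "-i", REF,
                    "-c:a", "libvorbis", "-b:a", "160k", LOSSY], check=True)

    def play(path: str) -> None:
        subprocess.run(["ffplay", "-nodisp", "-autoexit", "-loglevel", "quiet", path],
                       check=True)

    correct = 0
    for i in range(1, TRIALS + 1):
        x = random.choice([REF, LOSSY])        # hidden sample X is secretly A or B
        print(f"Trial {i}: playing A (lossless), B (lossy), then X...")
        play(REF); play(LOSSY); play(x)
        guess = input("Is X (a) lossless or (b) lossy? ").strip().lower()
        correct += (guess == "a") == (x == REF)

    print(f"{correct}/{TRIALS} correct; ~{TRIALS // 2} is what pure guessing gets you")
    ```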


  • For sure. IKEA is a great place to start (or stay), as it’s a cheap ecosystem and their app/implementation doesn’t require permanent internet access - it functions fine during an internet outage, and it’s quite privacy-respecting.

    HomeAssistant is nowhere near as hard to set up as it used to be. If you have an old mini-PC retired from work sitting around, there are HA images for PCs now, and it’s pretty simple to set it up to use your IKEA hub (or whatever you already have) while adding a huge swath of optional features (quick API sketch below).

    I agree it’s still not something your average Joe will set up, but the continual lowering of barriers gets more people running a self-hosted local config, which is a great thing for privacy and for growing the hobby.
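    As a taste of what’s scriptable once HA is running, here’s a minimal sketch against Home Assistant’s REST API - the endpoints are HA’s documented ones, but the host, token, and entity ID are placeholders you’d swap for your own:

    ```python
    import requests  # pip install requests

    HA_URL = "http://homeassistant.local:8123"  # placeholder: your HA instance
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # created under Profile -> Security
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    # Read the current state of a bulb (entity ID is a placeholder)
    state = requests.get(f"{HA_URL}/api/states/light.ikea_bulb", headers=HEADERS).json()
    print(state["state"], state["attributes"].get("brightness"))

    # Toggle it via the light service
    requests.post(f"{HA_URL}/api/services/light/toggle",
                  headers=HEADERS, json={"entity_id": "light.ikea_bulb"})
    ```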


  • There’s an xkcd for everything, isn’t there.

    It’s not wrong, but the major attraction of Matter is that it requires devices to be able to operate locally (not tying them to cloud services that die with every internet outage, or permanently when the service is retired), and that it’s an application-layer protocol - meaning it can run over WiFi, Ethernet, or Thread.

    Many existing smart home hubs have been able to implement Matter in software and simply send out an OTA update to add certified Matter support.


  • The real issue with smart home adoption has been proprietary formats all vying for dominance and fragmenting the market. I don’t think AI has changed much.

    Matter (and Thread) are a huge change to the SmartHome landscape because they’re open protocols with well-documented standards - and they’ve finally begun appearing in big manufacturers’ line-ups such as IKEA’s.

    Once their availability spreads I suspect a lot more people will get into running their own local smart home (eg HomeAssistant), because they won’t have to go through the ‘ok, do I need Z-Wave or ZigBee or HomeKit or IFTTT or Hue or Tuya or… you know what, fuck this’ dance. It’ll all be the same protocol, and communications, config, and debugging will be much easier.






  • Yeah, DuckDNS gave me many false-positive outages - its resolution would just fail, for multiple half-days in every year I used it (5yrs+).

    I moved to afraid.org and it’s been solid, if anyone’s looking for another free service - the only cost is that you have to log in once every six months to confirm your account isn’t dormant. There’s a paid tier with more features (that most home users will never need), which lets the guy running it fund a very reliable service.
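    For reference, updating your IP there is just an HTTP GET against a per-host URL (copy your host’s ‘Direct URL’ from the Dynamic DNS page) - a minimal cron-friendly sketch, with the token as a placeholder:

    ```python
    #!/usr/bin/env python3
    """Minimal afraid.org dynamic DNS updater - run from cron every few minutes."""
    import urllib.request

    # Placeholder: paste your host's "Direct URL" from freedns.afraid.org here
    UPDATE_URL = "https://freedns.afraid.org/dynamic/update.php?YOUR_UPDATE_TOKEN"

    with urllib.request.urlopen(UPDATE_URL, timeout=10) as resp:
        print(resp.read().decode())  # server replies with updated/unchanged status
    ```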



  • I was running Helldivers 2 on CPU alone for a few weeks this year, wondering why my framerate sucked. I thought the devs had put out a bad update.

    Then I realized the game had forced DX12 and decided that my graphics card lacked a feature it required - so rendering fell back to CPU only. Forcing DX11 in the config fixed it. Me: 🤡



  • I agree it’s great at writing and framing out parts of code and selecting libraries - it definitely has value for coding. $1500 bil of value, though? I doubt it.

    My main concern there lies with the next gen of programmers. Making sense of what ChatGPT (and Claude etc) outputs, and adjusting or correcting it to fit a project’s scope and requirements, takes significant prior programming experience - and it will be much harder for junior devs to build that skill with LLMs doing all the groundwork. It’s essentially the same problem wider education faces now, with kids/teens just using LLMs to write their homework and essays. The consequences will be long-term and significant. On top of that (for coding), it’s taking away the entry-level work that junior devs would usually do and then have cleaned up for prod by senior devs - and that’s not theory; the job market for junior programmers is already dying.


  • When people say “I fucking hate AI”, 99% of the time they mean “I fucking hate AI™©®”. They don’t mean the technology behind it.

    To add to your good points: I’m a CS grad who studied neural networks and machine learning years back, and every time I read some idiot claiming something like ‘this scientific breakthrough has scientists wondering if we’re on the cusp of creating a new superintelligent species’ or ‘90% of jobs will be obsolete in five years’, it annoys me, because it’s not real - and it’s always someone selling something. Today’s AI is the same tech they’ve been working on and incrementally building upon for 30+ years; Moore’s Law has just marched on until we had the storage pools and computing power to run very large models and networks. There is no magic breakthrough, just hype.

    The recent advancements are all driven by the $1500 billion spent on grabbing as many resources as they could - all because some idiots convinced them it’s the next gold rush. What has that $1500 bil got us? Machines that answer general questions correctly around 40% of the time, plagiarize art for memes, create shallow corporate content that nobody wants, and write some half-decent code cobbled together from StackOverflow and public GitHub repos.

    What a fucking waste of resources.

    What’s real are the social impacts, the educational impacts, the environmental impacts, the effect on artists and others who have had their work stolen for training, and the usability of the internet (search is fucked now). And what will be very real soon is the global recession/depression this causes as businesses realize, more and more, that it’s not worth the cost to implement or maintain (in all but a very few scenarios).


  • I know you’re meming, but in Civilization (as in most games) you’re playing against predefined scripts and algorithmic rules, and at higher difficulty levels the computer opponents also get cheaper resource costs than you do - because the AI can’t compete with a skilled human player at that level (it literally cheats; toy sketch below).

    No LLM, no neural network, no deep learning… not ‘AI’ in the modern sense that’s being discussed here.
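    To illustrate the ‘it literally cheats’ bit: difficulty in these games usually boils down to scripted decisions plus a flat cost discount for the computer player. A toy sketch (all numbers invented, nothing from any actual Civ game):

    ```python
    import random

    # Hypothetical per-difficulty cost multipliers for the computer player
    DIFFICULTY_DISCOUNT = {"prince": 1.00, "emperor": 0.80, "deity": 0.60}

    def unit_cost(base_cost: int, is_computer: bool, difficulty: str) -> int:
        """The human always pays full price; the computer pays a discounted cost."""
        if is_computer:
            return round(base_cost * DIFFICULTY_DISCOUNT[difficulty])
        return base_cost

    def scripted_move(at_war: bool, my_strength: int, enemy_strength: int) -> str:
        """Toy decision script: fixed rules and dice rolls, no learning involved."""
        if at_war and my_strength > 1.5 * enemy_strength:
            return "attack"
        if at_war:
            return "defend"
        return "build" if random.random() < 0.7 else "expand"

    print(unit_cost(100, is_computer=True, difficulty="deity"))  # 60 vs the human's 100
    print(scripted_move(at_war=True, my_strength=200, enemy_strength=100))  # attack
    ```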