

I use this for its ability to simulate brightness levels lower than what the system normally allows.


It’s the HDMI Forum’s fault: they won’t allow AMD to add HDMI 2.1 support to its open-source Linux driver. It’s very possible that the hardware itself supports 2.1.
More detailed explanation here: https://www.phoronix.com/news/HDMI-2.1-OSS-Rejected


This is fairly harmless compared to the government trolls. Pretty much every Canada-related subreddit has been dealing with a flood of xenophobic posts from brand-new accounts with hidden post history. The better subreddits quickly ban or lock them, but several leave them up because they reinforce the moderators’ own worldviews. Not even the Costco Canada subreddit is safe.


It has to be tailored to the specific hardware, so I don’t think it’s a major concern for most users. It doesn’t seem like something that can be fully mitigated either, so it’s probably not worth worrying about. Side-channel attacks are really cool but also kind of useless in most practical scenarios.
The only advantage of Teams is that it’s bundled with other Microsoft software. It’s worse than Slack in every way. It’s a textbook case of leveraging a monopoly.


To be fair, anything short of selling Chrome or breaking up the company would get a positive reaction. The possibility of losing Chrome was already priced in.


Perplexity (an “AI search engine” company with $500 million in funding) can’t bypass Cloudflare’s anti-bot checks. For each search, Perplexity scrapes the top results and summarizes them for the user. Cloudflare intentionally blocks Perplexity’s scrapers because they ignore robots.txt and mimic real users to get around Cloudflare’s blocking features. Perplexity argues that its scraping is acceptable because it’s user-initiated.
Personally I think Cloudflare is in the right here. The scraped sites get zero revenue from Perplexity searches (unless the user decides to go through the sources section and click the links), and Perplexity’s scraping is unnecessarily traffic-intensive since they don’t cache the scraped data.
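For context, honoring robots.txt is trivial for a scraper. Here’s a minimal sketch of the check in Python (a generic illustration, not Perplexity’s or Cloudflare’s actual code; the site and user-agent strings are placeholders):

```python
# Generic illustration of the robots.txt check a polite scraper performs
# before fetching a page. Site and user-agent strings are placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

url = "https://example.com/some-article"
if rp.can_fetch("ExampleBot/1.0", url):
    print("robots.txt allows fetching", url)
else:
    print("robots.txt disallows", url, "- a polite crawler skips it")
```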
pip’s cache is another common culprit; I’ve seen it hit 50 GB.
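If you want to check yours, a quick sketch (assuming pip 20.1+, which provides the “pip cache” subcommand):

```python
# Rough sketch: report the size of pip's wheel/download cache.
# Assumes pip >= 20.1 so the "pip cache dir" subcommand exists.
import subprocess
from pathlib import Path

cache_dir = Path(subprocess.check_output(["pip", "cache", "dir"], text=True).strip())
total = sum(f.stat().st_size for f in cache_dir.rglob("*") if f.is_file())
print(f"{cache_dir}: {total / 1e9:.1f} GB")
# Reclaim the space with: pip cache purge
```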


It also sets the context length to 2k by default IIRC, which breaks a lot of tasks and gives a bad first impression to users who are likely running local models for the first time.
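Assuming this is about Ollama’s 2048-token default (my read of it), the limit can be raised per request rather than relying on the default; a sketch with the ollama Python client:

```python
# Assumption: "it" refers to Ollama, whose default context window is 2048 tokens.
# The num_ctx option raises it per request; the model name here is hypothetical.
import ollama

resp = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize this long document ..."}],
    options={"num_ctx": 8192},  # ask for an 8k window instead of the 2k default
)
print(resp["message"]["content"])
```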
They’ve been saying this for the last 2 years


They get a new feature to boast about


“Free market” fans when free market


The speed of many machine learning models is bound by the bandwidth of the memory they’re loaded on, so that’s probably the biggest one.
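As a back-of-the-envelope sketch (my own illustrative numbers, not benchmarks): for single-stream LLM decoding, every generated token has to stream roughly all of the weights through memory, so peak throughput is about bandwidth divided by model size:

```python
# Back-of-the-envelope: token generation bound by memory bandwidth.
# Each decoded token reads (roughly) every weight once, so
# tokens/sec ~ memory bandwidth / size of the weights in memory.
def max_tokens_per_sec(bandwidth_gb_s: float, weights_gb: float) -> float:
    return bandwidth_gb_s / weights_gb

# Illustrative numbers only: a ~7B model at 4-bit is roughly 4 GB of weights.
print(max_tokens_per_sec(50, 4))    # dual-channel desktop DDR5-ish: ~12 tok/s
print(max_tokens_per_sec(1000, 4))  # high-end GPU-ish: ~250 tok/s
```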


Rather than CPUs, I think these are a much bigger deal for GPUs, where memory is much more expensive. I can get 128 GB of RAM for 300 CAD; the same amount of VRAM would cost several grand.
You’re licking the boots of a company that uses the work of others without compensation or credit, then sells it back to you at a premium. This is the exact behavior the GPL was created to prevent. I have nothing against the technology if it’s made with permission and benefits the people it depends on, but that’s clearly not the case here.
Luddites are an apt comparison. The Luddites fought to protect their livelihoods from industrialists who aimed to replace them with cheap, low-skilled, and child labour. The goal of AI isn’t advancement, it’s replacement, and most of the companies pushing it are transparent about that.
You can’t LARP about using open-source software while creating memes with closed-source garbage.


Seems pretty underwhelming. They’re comparing a 109B model to a 27B one and it’s only kind of close. I know it’s only 17B active, but that’s irrelevant for local users, who are more likely to be limited by memory than by speed.
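Rough math on the memory point (my own estimate, assuming ~4-bit quantization, i.e. about half a byte per parameter, and ignoring KV cache and overhead): every parameter has to be resident even if only 17B are active per token, so the MoE design saves compute but not memory.

```python
# Rough estimate of weight memory: all parameters must be loaded,
# regardless of how many are active per token (MoE helps speed, not footprint).
def weight_memory_gb(params_billions: float, bytes_per_param: float = 0.5) -> float:
    # 0.5 bytes/param ~ 4-bit quantization; ignores KV cache and runtime overhead.
    return params_billions * bytes_per_param

print(weight_memory_gb(109))  # ~54.5 GB of weights
print(weight_memory_gb(27))   # ~13.5 GB of weights
```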


They’ll sell each of them off to be run into the ground by some other billionaire. Both are heavily subsidized by Google’s ad business, which is still relatively unobtrusive up front. As much as Google’s services have degraded, it will be much worse with another company at the helm trying to squeeze as much value out of its investment as possible.
This will be the subprime mortgage crisis of the 2020s.
They’ve been doing it since the start. OpenAI initially fearmongered about how dangerous GPT-2 was as an excuse to avoid releasing the weights, while simultaneously working on much larger models it intended to commercialize. The whole “our model is so good even we’re scared of it” shtick has always been marketing or an excuse to keep secrets.
Even now they continue to use this tactic while actively suppressing their own research showing real social, environmental, and economic harms.