  • Time to DIY!

    Waveshare makes a touchscreen for the Pi: 1200x800 is fine for Home Assistant, and the price is good at $70/$75 for the 8-inch/10.1-inch version (10.1DP-CAPLCD).

    A Raspberry Pi 3/4/5 can mount directly on the back of it, for whatever outrageous price Pis go for now. (Around here, a 4B/4GB is 60€.)

    Waveshare PoE HAT for $20.

    Assemble it like Lego, put it in a wooden frame or a 3D print, done. Around 160 USD plus shipping for a full build of a PoE, battery-less touchscreen display that runs full Linux in whatever flavor you like (and has far more processing power than the job needs).
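
    A rough tally of the numbers above: $75 (10.1-inch display) + ~$65 (the 60€ Pi 4B/4GB, roughly converted) + $20 (PoE HAT) ≈ $160, before the frame and shipping.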

    You could probably do it even cheaper with an Orange Pi Zero 3 plus a PoE-to-USB-C converter, or a Banana Pi BPI-P2 Pro IoT, which has PoE built in.

    It is cheaper than a tablet and strips out everything a simple Home Assistant kiosk doesn’t need: battery, cameras, really high-DPI display, LTE radios, etc…

    But yeah, e-paper displays are about 3x the display cost without a touchscreen. In my opinion, though, e-paper is better suited to a static, non-interactive sensor display: driven by an MCU, it can run on battery for almost no power, because it only has to update once an hour or so.



  • What other people haven’t quite touched on is that the built-in hardware certainly won’t be powerful enough to run demanding VR games at good frame rates and resolutions.

    I also have my doubts about the 6GHz WiFi connection being enough for it; I hope there is also a wired option.

    But it will be awesome to be able to do normal tasks like coding, writing, etc… outside in the garden, for example. For people who don’t have a dedicated VR space, I think this could be great: 6GHz WiFi outside, without needing base stations.





  • Hey, something I can maybe help with.

    Flatpak IDEs on the main system are not very useful for development; I got rid of mine entirely. I am developing firmware, so my case might differ a bit from yours, but what I did is set up a single Arch distrobox where everything embedded-dev-related that had to work together (J-Link, Nordic tools, code-oss, etc…) could be installed. A few standalone debugging tools like the ST-Link utilities and Saleae Logic 2 could be installed to the home folder by default, and Code could still find them from the distrobox (though they could be installed in the distrobox as well).

    The distrobox doesn’t even need an init system, but I ran into a few problems, like having to manually chmod USB devices to give the ST-Link access. Udev rules in /etc/udev/rules.d are also hit or miss: the STM rules just don’t work for me, but the Nordic ones do.
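
    If it helps, here is a minimal sketch of the kind of udev rule involved, assuming an ST-Link/V2 (USB ID 0483:3748; check yours with lsusb); the file name and the permissive mode are just illustrative:

    ```
    # /etc/udev/rules.d/99-stlink.rules  (hypothetical file name)
    # Give all users access to an ST-Link/V2 probe (0483:3748).
    # Other ST-Link revisions (V2-1, V3) use different product IDs.
    SUBSYSTEM=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="3748", MODE="0666"
    ```

    Then reload with sudo udevadm control --reload-rules && sudo udevadm trigger and replug the probe.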

    The high storage consumption is likely negligible (or at least a nitpick) since storage is so cheap nowadays. Your SSD doesn’t care whether it holds 15GB or 20GB of system programs, especially when development codebases, SDKs, games, and media will likely make up 90% of the space and almost never share libraries even on traditional systems.


  • But actual results and bugs have very little to do with corporate firings or open positions, as 30 years of history shows us.

    If corporations “think” they can fire people, with AI as the excuse, and pocket the difference, they will do it. We are already seeing it in the US tech-bro sphere.

    Companies will tank themselves in the medium-to-long term to make short-term profits, which I think is the “dev market” OP is talking about. Results shouldn’t affect the market, but they will, because you have MBAs making technical decisions. I could be wrong, but the tech market is very predictable in its behavior: they will hire a skeleton crew and work them to burnout fixing the AI slop. (The tech industry needs unions, now.)


  • It is funny, because electric motors have nearly unlimited* torque, depending on the kind. If you have thick enough power cables and winding conductors, you can just keep pushing more current through to get more torque.
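
    To first order, torque is just the motor’s torque constant times current, T ≈ k_t · I: with a made-up k_t of 0.1 N·m/A, pushing 100 A gives about 10 N·m, and the practical ceiling is heat in the copper, not the physics.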

    Torque is the thing they are best at, on top of low noise, double or triple the efficiency, little to no maintenance, fewer parts, no tailpipe emissions, etc…

    Literally the only good thing about combustion engines is the energy density of their fuel.

    I think the problem is that motorheads see the enshittification of the auto industry as a whole and blame electric motors, because it happened at about the same time EVs started coming out, so they push back on the wrong thing.


  • That only solves maybe one of the listed problems. Whatever instance you run, you still have to fetch media and serve it to other viewers and instances. The only problem this potentially solves is CSAM spam/moderation.

    Say the instance ran on a cell phone: it could handle maybe 2 concurrent transcoding streams before stalling out and viewers hitting buffering (which makes them leave).

    If every person had their own tiny, low-powered server (an old laptop or desktop), you could have at most something like 5 concurrent transcodes on any instance in all of PeerTube.

    Assuming the average person has a 100/30Mbps connection (which is true in much of the world outside major cities, or even lower), that absolutely maxes out at 10 concurrent viewers if everyone is running AV1-compatible clients (which is not the case), and more like 6 concurrent viewers per video with h.264. Those estimates also assume low bitrates (so low quality), absolutely no slowdown from your ISP, and no other general home or work-from-home use. In reality it would be closer to 3-6 concurrent viewers per instance (not even per video).
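
    A back-of-the-envelope check of those viewer counts, using assumed per-stream bitrates (illustrative numbers, not measurements):

    ```python
    # Rough viewer-count estimate for a home-hosted PeerTube instance.
    UPLOAD_MBPS = 30  # the upload half of a 100/30Mbps connection

    # Assumed bitrates for a low-quality ~1080p stream in each codec.
    BITRATES_MBPS = {"AV1": 3.0, "h.264": 5.0}

    for codec, per_stream in BITRATES_MBPS.items():
        viewers = int(UPLOAD_MBPS // per_stream)
        print(f"{codec}: ~{viewers} concurrent viewers at {per_stream} Mbps each")

    # Prints ~10 for AV1 and ~6 for h.264, before any ISP slowdown
    # or other household traffic.
    ```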

    Still not even counting storage, which is massive for anyone who creates more than a couple of videos per year.

    My point is just that this is an extremely difficult and costly problem, not as simple as “more federation” the way it is for text- and image-based social media, because of the nature of video, the internet, and viral video culture. Remember, federation replicates all viewed and subscribed content on the other instance (so the home instance has to serve the data and both instances have to store it).




  • Just a few thoughts as to why it hasn’t taken off:

    Video is multiple orders of magnitude more difficult and expensive to serve than text or even audio.

    • Your server needs great upload speed, which is not achievable with an on-site home server for most people in the world.

    • Your server has to have at least one dedicated encoding GPU (no Raspberry Pis or Intel NUCs if you want any meaningful traffic).

    • Your server has to have a ton of storage, especially if you allow 4k content to be uploaded. Storage is much cheaper than it used to be, but it is still expensive: here in the EU, reliable storage is around 300€/12TB for drives, which fills up very fast with 4k videos or if you store multiple resolutions to reduce transcoding load (see the rough math after this list).

    • Letting random people upload video onto your instance is significantly harder to moderate than text or photos. Think of the CSAM spam that hit Lemmy when it started taking in many new users…

    • The server’s power usage (and bill) will also be much higher than without PeerTube, because of constant transcoding.
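
    The rough storage math, assuming a modest 20 Mbps bitrate for 4k (real uploads vary a lot):

    ```python
    # How fast 4k video fills a 12TB drive at an assumed 20 Mbps bitrate.
    BITRATE_MBPS = 20
    GB_PER_HOUR = BITRATE_MBPS / 8 * 3600 / 1000  # Mbps -> MB/s -> GB/hour

    DRIVE_GB = 12_000  # the 300€/12TB drive
    hours = DRIVE_GB / GB_PER_HOUR

    print(f"~{GB_PER_HOUR:.0f} GB per hour of 4k video")      # ~9 GB/hour
    print(f"~{hours:.0f} hours fit at a single resolution")   # ~1333 hours
    # Storing extra resolutions to cut transcoding load multiplies that.
    ```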

    The cost, both financially and in load on the server, is simply too great for me, and many others, to set up a PeerTube instance.

    Regardless of how easy it is for people to create on PeerTube, someone has to bear the cost of hosting it. That is cheap-ish for Lemmy or Mastodon, but there is a reason YouTube was a loss leader for Google for a long time, and why many streaming services restrict 4k video.

    That isn’t even getting into compensation for the content makers.



  • Yes, but I am also of the opinion that not a single acronym should be used without the section spelling it out at least once. Many, many programming docs will spell out an acronym exactly once, somewhere in the docs, and then never again.

    Also, if docs use more complex concepts that they don’t explain, they should link to a good explanation (so one doesn’t have to sift through mountains of crap to find out what the hell something does). The ArchWiki does this very well: every page is literally full of links, so you can almost always brush up on concepts you are unfamiliar with.

    There seem to be 10 extremely low-quality, badly written, low-effort docs for every 1 good documentation center out there. It is hard to RTFM when the manual skips 90% of the library and gives an auto-generated API reference with missing or cryptic explanations of parameters, for example.


  • But under this threat model? Why would it not be good?

    It has to be physically accessed on the PCB itself, from what I gather.

    There are 2 “threats” from what I see:

    • someone at the distribution facility pops it open and has the know-how to install malware on it (very, very unlikely)

    • someone breaks into your home unnoticed and has the time to carefully take apart your vacuum and upload pre-prepared malware instead of just sticking an IP camera somewhere. If this actually happens, the owner has much much bigger problems and the vacuum is the least of their worries.

    The homeowner is the only other person who can access it, and in that case that access is a big feature.