• 0 Posts
  • 75 Comments
Joined 2 years ago
Cake day: June 16th, 2023


  • You can’t directly convert an Android app into a native Linux app - Android is too different for that. The app is built to use the whole Android OS, not just the kernel (which is forked from Linux). That means the Android app is designed to run on mobile processors (usually ARM), and will be making calls to the Android OS for everything.

    You can’t repackage it directly as a Linux app. However, there are emulators and translation layers that can be used to run Android apps within Linux.

    Waydroid, for example, allows Android apps to run in Android containers on Linux. Anbox is another container approach to running Android apps. Both approaches essentially translate for the Android apps, and reduce overhead as they don’t have to emulate everything and can pass instructions directly to the Linux host system. You can also use full virtualization to emulate an Android device and run a whole virtual device, though that has a bit more overhead.

    I’m not aware of tools that can compile Android apps from source into Linux apps. It could be done in theory, but it would be complex due to the degree of translation of Android APIs needed. Compiling to some kind of container approach (i.e. bundling Anbox or Waydroid into the app) might be doable, but it would bloat the app. I don’t think there is demand for that kind of approach when running containers on Linux (and Windows) allows direct reuse of the Android APKs.
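    As a sketch of the container route, here is how you might drive Waydroid from a script. It assumes Waydroid is already installed and initialised; the APK path and package name are placeholders, not real apps:

```python
# Sketch: installing and launching an Android APK on Linux via Waydroid.
# Assumes Waydroid is installed and initialised; exits gracefully if not.
import shutil
import subprocess

def run_apk(apk_path: str, package: str) -> bool:
    """Install an APK into the Waydroid container and launch it."""
    if shutil.which("waydroid") is None:
        print("waydroid not found - install and init it first")
        return False
    # Start the container session, push the APK in, then launch by package name.
    subprocess.run(["waydroid", "session", "start"], check=True)
    subprocess.run(["waydroid", "app", "install", apk_path], check=True)
    subprocess.run(["waydroid", "app", "launch", package], check=True)
    return True

# Placeholder names, for illustration only:
run_apk("example.apk", "com.example.app")
```

    The point is that the APK is reused as-is; Waydroid translates between it and the Linux host rather than recompiling anything.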


  • If you’re new to Linux, then I’d say Linux Mint is the place to start. Use it with XFCE if lightweight is what you want.

    Not having cutting-edge packages is a red herring - you really don’t want bleeding edge, as that’s where the errors and breakages happen. Mint is reliable and secure, which is what you need when starting out. You don’t want to be a beta tester. Don’t confuse latest packages with most secure on Linux - plenty of packages have stable older versions which get security patches.

    Mint is also very popular, with a huge range of easy to find resources to help set it up the way you want it.

    Wayland is also a red herring - it’s the future, but it’s just not really ready yet. Yes, it’s more secure due to how it’s built, but in the scenarios you’d be using Linux in, the particular security benefits you’re hearing about are not really going to impact you day to day. And the trade-off is that Wayland is still buggy, with many apps still not working seamlessly. Most apps are designed for X11, and XWayland is an imperfect bridge between the two. I’m not saying Wayland is bad - it’s actually good and it is the future. But you don’t want to be problem-solving Wayland issues as a Linux newbie. Don’t see Wayland as essential for a good, stable and secure Linux install.

    Personally I wouldn’t recommend Fedora - it has a short update cycle and tends to favour newer, bleeding-edge tech and packages. That’s not a bad thing, but if what you want is a stable, reliable, low-footprint system and to learn the basics, I wouldn’t stray into Fedora just yet. Each release only gets about 13 months of support, so you face regular complete distro upgrades, and distro upgrades are the times when there are big package changes and the biggest chances of something breaking. The previous version loses support a month after a new release, so you do need to upgrade to stay secure. Most people won’t have issues between upgrades, but with any distro, when you do a big upgrade things can easily break if you’ve customised things and set them up differently to the base. It can be annoying having to fix things and get them back how you want them, and worse, it can lead to reinstalls. That’s not a uniquely Fedora problem, but the risk is higher with faster-updating and bleeding-edge distros. And in fairness there are lots of Fedora spins that might mitigate that - but then you risk being on more niche setups, so support can be harder to find when you need it.

    For comparison, the latest version of Mint is supported through to 2029, and major releases also get security patches and support for years even after newer versions are released. There is much less pressure to upgrade.


  • I work in a hospital and the worst days to work are weekends. The hospital is still full of patients, but most staff are off, so it’s busier for those of us who are in. And it’s much harder dealing with sick patients and emergencies on a weekend as a result. Also, all your friends and family are off on the weekend, so you can’t see them.

    Meanwhile, if you have days off in the week, it’s great because everything is open (unlike a Sunday) and all the kids are in school. So you can go out and enjoy the parks, or venues like gyms, or shop freely, etc. But most of your friends and family are at work, so that limits things.

    I would definitely take 2 days off together, not split them. If I were to have 2 days off and work every weekend, I’d either take Mon/Tue off or Thu/Fri. I think it’s just preference and how busy your job is. It could suck being in work on a Friday while everyone else is gearing up for the weekend off and discussing their plans, plus people head off early where they can - I’d probably take Thu/Fri off so I didn’t have to put up with all that.

    I personally work 80% of full time and do 3 long days plus on-call. It works out as 3 x 10-hour days, plus 2 hours’ pay per week for my weekend on-call work every 16 weeks. I end up with 4 days off every week and it’s glorious. So aiming for a 5-day week may be a mistake. When I was 100% full time I did 4 long days for a bit - it was OK, but I had Tue off, worked the other days, and had the rhythm of weekend off then on/off/on - it didn’t feel like I was really off for 3 days a week. I’d definitely recommend always sticking your days off together.

    But it may be that longer days are the real best option, if available. Even working 100% hours you have 1 less day commuting on 4 days, and if you work 10 hours, so you start early and finish late, you can even miss rush hour. I used to stay late or come in early to miss traffic when I was doing normal 9-5 work, so switching to 10-hour 8-6 days was easy. It depends what your role is and what your stamina for long days is, though.



  • BananaTrifleViolin@lemmy.world to Linux@programming.dev - Best rootless remote X solution?

    The reality is that what you’re asking for is very complex - you’re asking for lagless streaming of a desktop. That means running a GUI on remote hardware, then streaming that video to another computer with low enough latency that you have no perception of lag when moving the mouse or interacting, plus continuous streaming of desktop updates.

    There are lots of factors at play that can make it a poor experience.

    You can have what you want if:

    • The server you SSH into has the resources to run X well
    • The server you SSH into has the hardware to convert that to video (with some tricks) and stream it
    • The internet connection between you and the remote server is stable and has enough bandwidth to stream the desktop
    • The internet connection between you and the remote server is low latency.
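    To put rough numbers on the last two points (the figures below are my own illustrative assumptions, not benchmarks):

```python
# Rough sanity check of what "lagless" desktop streaming demands.
# All figures here are illustrative assumptions, not measurements.

width, height, fps = 1920, 1080, 60  # a 1080p60 desktop stream
bits_per_pixel = 0.1                 # rough H.264 compression for screen content

bitrate_mbps = width * height * fps * bits_per_pixel / 1e6
print(f"Stream bitrate needed: ~{bitrate_mbps:.1f} Mbit/s")

# Latency budget: ~50 ms motion-to-photon is roughly where lag stops being felt.
encode_ms, decode_ms, display_ms = 10, 5, 8  # assumed fixed per-frame costs
network_budget_ms = 50 - (encode_ms + decode_ms + display_ms)
print(f"Round-trip network budget left over: {network_budget_ms} ms")
```

    That leaves very little headroom for a typical internet path, which is why dedicated encoding hardware and a nearby, stable connection matter so much.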

    It’s very hard to achieve all those things, even when you’re creating machines dedicated to remote desktop streaming. I have done that in my work with Windows devices, and to get good-quality streaming we needed dedicated hardware, dedicated software and high-quality internet. Even then, some of our users had bad experiences.

    Most remote servers are definitely not set up to provide what you want. Dedicated software for the task will help, as there are lots of tricks it can apply to make a streaming desktop appear latency-free, versus simpler solutions that just stream the actual desktop.

    VNC is not a good solution - it’s basically just taking screenshots and streaming those to you. It works with fast devices on a local network, but is very limited for your use case.

    If you really want to solve this look at software optimised for low latency uses such as gaming. For example Moonlight/Sunshine are for game streaming but work with desktops. They are designed to be low latency high quality. But to achieve that you need the video hardware on your server, and the good low latency stable internet connection.

    Real-world high-quality desktop streaming also needs good graphics hardware and optimised tools. It can be achieved with open source software, but you need the hardware to do the heavy lifting.


  • If the EU were concerned about the US jurisdiction of Linux projects it could pick:

    • openSUSE (org based in Germany)
    • Mint (org based in Ireland)
    • Manjaro (org based in France/Germany, and based on Arch)
    • Ubuntu (org based in UK)

    However if they didn’t care, then they could just use Fedora or other US based distros.

    I think it would be a good idea for the EU to adopt Linux officially, and maybe even have its own distro, but I’m not sure this Fedora base makes sense. Ironically this may also be breaching EU trademarks, as it’s masquerading as an official project by calling itself EU OS.




  • I use Jellyfin as a home media server - in my setup I have it running on my desktop PC, and I use it to stream a media library to my TV.

    A home media server basically just means it’s meant to be deployed at a small scale, rather than as a platform for 1000s of people to use.

    Your scenario is exactly what Jellyfin and Plex can do. If you have 5 users, then you just need a host device running the server that is powerful enough to run 5 video streams at the same time. The server can transcode (where the server takes on the heavy lifting, needing a more powerful CPU) or direct play (where all the server does is send the bits of the file, and the end user’s device, such as a phone or smart TV, does the hard work of decoding and playing it, so a lower-powered server device can work).

    If this is contained within your home, your home wifi or network should be fine for this, even up to 4k if your network is good enough quality. If the 5 people are outside your home, then your internet bandwidth - particularly your upload bandwidth - and your and their internet quality will be the important determinants of quality of experience. It will also need more configuring, but it is doable.
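    As a rough sketch of the upload arithmetic (the per-stream bitrates are my own assumed typical values, not hard requirements):

```python
# Upload bandwidth needed for simultaneous direct-play streams.
# Per-stream bitrates below are assumed typical values, not hard requirements.

assumed_bitrate_mbps = {"720p": 4, "1080p": 8, "4k": 25}
users = 5

for quality, rate in assumed_bitrate_mbps.items():
    total = users * rate
    print(f"{users} x {quality} streams: ~{total} Mbit/s upload needed")
```

    Many home connections have far less upload than download, so for remote users this is usually the first bottleneck to check.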

    This doesn’t need to be expensive. A Raspberry Pi with storage attached would be able to run Jellyfin or Plex, and would offer a decent experience over a home network if you direct play (i.e. just serve up the files for the end user’s device to play). You might need something more powerful for 5 simultaneous direct-play streams, but it would still be in the realms of low-powered, cheap ARM devices.

    If you want to use transcoding and hardware acceleration, you’d need better hardware for 5 people to stream simultaneously - for example an Intel or AMD CPU, and ideally even something with a discrete graphics card. That doesn’t mean a full desktop PC - it could be an old PC or a mini PC.

    However, most end-user devices such as TVs, PCs, phones and tablets are perfectly capable of direct playing 1080p video themselves without the server transcoding. Transcoding has lots of uses - you can change the audio or video format on the fly, or enable streaming of 4k video from a powerful device to a less powerful device - but it’s not essential.

    Direct play is fine for most uses. The only limitation is that the files on the server need to be in a format that can be played on the user’s device. So you may need to stick to mainstream codecs and containers: things like mp4 files and h.264/AVC. You could get issues with users not being able to play back files if you have, say, mkv files and h.265/HEVC or VP9. Then you’d either need to install the codecs on the user’s device (which may not be possible on a smart TV, for example) or use transcoding (so the server converts the format on the fly to something the user’s device can use, but then you need a more powerful server).
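    The decision the server makes can be sketched like this (the client capability table is a made-up example for illustration, not Jellyfin’s or Plex’s actual logic):

```python
# Toy model of the direct-play vs transcode decision a media server makes.
# The client capability table below is hypothetical, for illustration only.

CLIENT_SUPPORT = {
    # client: (supported containers, supported video codecs)
    "smart_tv": ({"mp4"}, {"h264"}),
    "phone":    ({"mp4", "mkv"}, {"h264", "hevc"}),
}

def playback_mode(client: str, container: str, vcodec: str) -> str:
    containers, codecs = CLIENT_SUPPORT[client]
    if container in containers and vcodec in codecs:
        return "direct play"  # server just sends the file bytes
    return "transcode"        # server converts on the fly (needs CPU/GPU power)

print(playback_mode("smart_tv", "mp4", "h264"))  # direct play
print(playback_mode("smart_tv", "mkv", "hevc"))  # transcode
```

    Sticking to the most widely supported formats keeps everything on the cheap "direct play" path.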

    I prefer Jellyfin as it’s free and open source. It has free apps for the end user on many devices, including smart TVs, streaming sticks, phones, tablets and PCs. It’s slightly less user-friendly than Plex to set up, but not by much. And the big benefit is your users are only exposed to what you have in your library.

    Plex is slightly more user-friendly but commercial. You have to pay for a licence to get the best features, and even then it pushes advertising and tries to get your users to buy commercial content. Jellyfin does not do that at all.

    Finally, if your plan is to self-host in the cloud, again this is doable, but then you stray into needing to pay for a powerful enough remote computer/server, plus the bandwidth for all content to be served up (in addition to your existing home internet), and the potential for privacy issues and even copyright infringement issues around the content you are serving. A self-hosted device in your home is much more secure and private. A cloud-hosted solution can be secure, but you’re always at risk of the host company snooping on your data or having to enforce copyright laws.

    Edit: the other thing to consider is an FTP server. If you just want to share the files, it’s very simple to set up. What Jellyfin and Plex offer is convenience, by having a nice library to organise things and serving up the media. But direct play from a media server is not far off just downloading the file from an FTP server to your home device and playing it. And since you can also download files from a Jellyfin server, I’d say it’s worth going the extra step and using a dedicated media server over FTP.




  • In fairness to The Register, they also ridicule moving to a dedicated ERP in the same article.

    You’re absolutely right that there is nothing wrong with Excel. It’s powerful software, and ultimately it comes down to human and organisational processes as to whether it’s being used to its best or not. You can also have the most expensive top-end dedicated ERP in the world and still be a total mess. Similarly, businesses used to run on pen and paper and could be highly efficient.

    Software is just a tool, and organisations go wrong when they think it alone is the solution to their problems.

    Also, I doubt Health NZ’s overspend has anything whatsoever to do with Excel. Instead it’ll be down to rising demand and inflationary pressures on public finances. We have the exact same problems here in the UK with the NHS, just scaled up to a £182bn budget.


  • What’s misinformation about it? To say “this is misinformation” and not explain why can be a form of misinformation in itself.

    The article does say it previously called this a “backdoor” and has since been corrected. Otherwise it seems fairly factual, although the person it quotes continues to use the term “backdoor”.

    To say it’s a backdoor implies this is deliberate, or that there is some motivation to conceal the presence of these commands - there is no evidence for that whatsoever, and no evidence of malign intent. Most chips likely have undocumented commands used by the chipmakers.

    However it is fair to say this is a potential security risk if these commands are not locked down in production and could be used as an attack vector. Even if they could be used to scrape information that would be concerning. But we’d need to know more detail.

    If it’s been covered better elsewhere, please share it, as that is a better counter to misinformation than just saying “misinformation”.


  • I agree the new graphics card market is a mess, but that’s because cards are being bought for AI and crypto, which is inflating prices.

    However, it’s not killing PC gaming. PC gaming is the single biggest platform, and in reality, for most gamers, the latest generation of cards is way overpowered for most games - in a way we’ve never seen before.

    The AAA part of the games industry is in crisis because we’re now well beyond the point of diminishing returns for graphics. Graphics quality is already very high, and each innovation now has minimal impact on game quality. It’s hard to innovate when you’re near photorealism - you just get closer and closer. The last generation of cards is already above what is needed to run new games well. And people aren’t buying $70+ games when all they offer is graphics boosts and crap gameplay - that’s breaking the business model of the big publishers who lazily relied on graphics to drive sales.

    The PC market has fragmented into lots of smaller indie games. It’s big enough overall that a small game targeted at fans of specific genres can be high quality and successful enough. There are so many games being released all the time now that the market may even be saturated - it’s an amazing time for player choice.

    Personally, I’m in no rush to upgrade my 3070. I probably won’t until something like The Witcher 4 comes out, and even then maybe I won’t need to go to the highest end. The Witcher 3 is 10 years old and still looks spectacular to me. Cyberpunk 2077 is 5 years old, and I play it on my 3070 and on a separate PC with an integrated GPU. I barely notice the graphics quality difference between the 2 devices, even though the 3070 is running at the highest settings while the iGPU PC is running at mid settings.




  • One thing Phil Spencer does not seem to care about is emulation. There are already Xbox and PlayStation emulators that allow access to more of both platforms back catalogues than any of the current generation consoles are capable of…

    Xbox could build cloud based emulators off the open source tools already available and make their entire Xbox back catalogue accessible to current users to stream. They could help improve the tools to ensure greater and greater compatibility for titles and then it would be there forever.

    The reason it doesn’t happen is money. They don’t see money in game preservation, so they don’t bother beyond a few big-name nostalgia hits. Muse AI isn’t about game preservation, it’s about game development - they’re just pissing around with game preservation to feed it content, as a punt on it somehow making game development cheaper in the future.


  • AI can make Shakespeare BETTER! Like, it can put it in modern text speak, and shorten it down to fit in a 30-second TikTok, plus give space for ad breaks and Temu product placement. AI will help enhance user engagement with Shakespeare and also leverage new monetisation options and cross-platform synergies.

    All we have to do is let people copyright AI-made content, because ultimately it wasn’t Shakespeare that did the hard work, it was the AI tech bros who transformed it into a modern content meme and raised third-quarter profits for everyone!


  • Yeah, instead game preservation is being solved by abandonware and copyright infringement.

    Legal open source software is doing the heavy lifting, and then torrenting is sharing the files. But there is a huge risk, as there is no safety net to preserve the niche and unpopular games.

    The game publishers and broken copyright laws are blocks to preservation but fortunately people are just doing it anyway. And the more the big companies push against it (including targeting emulation systems for current systems) the more they push it underground and out of any control they might have had. Typical greed and stupidity.