• 1 Post
  • 554 Comments
Joined 3 years ago
Cake day: June 15th, 2023



  • Sure, it can happen. The anecdote sounds ludicrous to me: gatekeeping someone with that much experience over checking a box like that.

    This is surprisingly common in many industries. It was one of the reasons I went back and got a degree as a working adult. It worked, and I was able to land jobs that had that requirement, which was a springboard into higher earning work. It was so strange the first time it happened. I got a call from an old coworker I hadn’t seen or heard from in about 12 years. He was a boss by then, looking to hire for a lucrative position. We talked for a bit to catch up, he said I had the skills he wanted, then almost as an afterthought he asked “Oh, uh, do you have a Bachelors degree?” and I said, for the first time in an employment situation, “yes”. His response was “Okay, sounds good. Show up on Monday, you’ve got the job”. That was it. Without being able to say “yes” there, I would not have gotten that job. In the years since, I’ve received that same question and given the same answer at a number of jobs, each with increasing salary and benefits.

    Also, no one asks when you got the degree. Everyone always assumes you got it after high school as is done traditionally.




  • Going back to school when you’re employed means debt, earning way less or nothing during your bachelor or master, stress, opportunities you’re not aware of because you’re simply not at your workplace anymore

    Don’t quit your day job. Do school in your non-work hours. This is how I did it. I stayed professionally employed and I went back at 30 years old. I did school for about 3 years part-time to get a 2-year Associates degree. Because I went with Community College and did only 1 or 2 classes per term, I never had to take on debt.

    I used that Associates degree to get a better paying job that also came with a tuition reimbursement program. It paid 75% of books and tuition up to a certain dollar figure per year (the IRS limit). Again, because I was going to school part-time in my off-hours, I simply never exceeded that IRS limit, so I collected the maximum reimbursement. I finished my Bachelors degree before turning 40. Again, I graduated with zero debt because I kept my professional employment and used the tuition reimbursement benefit. With that Bachelors degree I was able to get an even better job, which led to significant pay raises in the years that followed.

    So, I disagree with your original premise that going back to school as a working adult has to mean unemployment, debt, and loss of income. I’m not going to say what I did was easy, but what I did a little while ago is still possible today. I have a close friend, a year older than me, who got his Associates around the same time I did using the same “keep your day job, do school part-time” method, but he didn’t start his Bachelors when I did. He did so later, and he graduates with his Bachelors two months from now!



  • But inexperienced coders will start to use LLMs a lot earlier than the experienced ones do now.

    And unlike you, who can pick out a bad method or approach just by looking at the LLM output and correct it, the inexperienced coder will send the bad code right into git if they can get it to pass a unit test.

    I get your point, but I guess the learning patterns for junior devs will just be totally different while the industry stays open for talent.

    I have no idea what the learning path is going to look like for them. Besides personal hobby projects to get experience, I don’t know who will give them a job when what they produce from their first efforts will be the “bad coder” output that gets replaced by an LLM and a senior dev.

    At least I hope it will and it will not only downsize to 50% of the human workforce.

    I’ve thought about this many times, and I’m just not seeing a path for juniors. Given this new perspective, I’m interested to hear if you can envision something different than I can. I’m honestly looking for alternate views here, I’ve got nothing.


  • It won’t replace good coders but it will replace bad ones because the good ones will be more efficient

    Here’s where we just start touching on the second order problem. Nobody starts as a good coder. We start out making horrible code because we don’t know very much, and through years of making mistakes we (hopefully) improve and become good coders.

    So if AI “replaces bad ones” we’ve effectively ended the pipeline for new coders to enter the workforce. This will be fine for a while, as we have two to three generations of coders that grew up (and became good coders) prior to AI. However, that most recent pre-AI generation is the last one. The gate is closed. The ladder pulled up. There won’t be any more young “bad ones” that grow into good ones. Then the “good ones” will start to die off or retire.

    Carried to its logical conclusion, assuming nothing else changes, eventually there aren’t any good ones left, nor will there ever be again.



  • Oh, there’s even more jank in this thing than the reboot workaround described above!

    I have 3 windows displaying different metrics on this display powered by the RPi. Because each metric is rendered as an animation, higher value metrics will consume more CPU. Since each window is a separate process, the animation speed would differ between windows without any modifications. So to make all 3 windows animate at the same relative speed, I calculate the number of objects being displayed for each metric, then add enough invisible (well, black on black) objects to that window to reach a fixed total. The result is that each window has the exact same number of objects and the animations move at the same speed.

    This works surprisingly well. The only time I have to monkey with the fixed value is when I’m running it on faster or slower Raspberry Pis. For example, I’ll use a lower fixed object count on an RPi 3 than on a faster RPi 4. The gist is something like the sketch below.
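    Roughly, the padding logic boils down to this (a minimal sketch; make_invisible_object() and TARGET_OBJECT_COUNT are made-up names standing in for the real app’s code):

    ```python
    # Minimal sketch of the padding trick. make_invisible_object() and
    # TARGET_OBJECT_COUNT are illustrative names, not the app's actual code.

    TARGET_OBJECT_COUNT = 300  # tuned per Pi: lower for an RPi 3, higher for an RPi 4

    def make_invisible_object():
        # Black-on-black object: animated like everything else, invisible to the viewer.
        return {"color": (0, 0, 0), "value": 0}

    def pad_objects(objects):
        """Top a window's object list up to the fixed total so every window
        animates the same number of objects and runs at the same speed."""
        padding = max(0, TARGET_OBJECT_COUNT - len(objects))
        return objects + [make_invisible_object() for _ in range(padding)]
    ```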



  • My RPi uptime on one project will never exceed 4 hours.

    I’ve got a cron job set to reboot my Raspberry Pi every 4 hours because I wrote a crappy Python app that continuously creates objects during operation that I then have to recreate, but I can’t delete the originals; or rather, I can delete the original parent, but the child survives and keeps its memory allocation. So a full reboot, with autolaunch of the application on boot, is my ugly janky workaround. It’s a cosmetic application, nothing critical. It’s just a colorful display of data metrics.
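    The workaround amounts to a system crontab entry along these lines (illustrative only; the schedule and reboot command show the shape of it, not a copy of my actual file):

    ```
    # /etc/crontab sketch: reboot at minute 0 of hours 0, 4, 8, 12, 16, 20.
    # Schedule and command are illustrative, not copied from the actual Pi.
    0 */4 * * *   root   /sbin/shutdown -r now
    ```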

    I can hear the horror and gnashing of teeth of real developers as they read this.


  • Ruby was the most approachable language I found, and it shepherded me from the limits of bash scripting and Windows batch file scripting to the next level.

    The author derides Ruby’s easy readability and syntax because it has issues scaling to large enterprise applications. I don’t disagree there is a performance ceiling, but how many hundreds of thousands of Ruby projects never rose to that level of need? The author is also forgetting that Ruby had RubyGems for easy modular functional additions years before Python eventually got pip.

    I don’t write in Ruby anymore, and Python has evolved to be much more approachable than it was when Ruby was in its prime. However, if someone came to me today saying they wanted the easiest programming language to learn that could build full applications on Linux, OSX, Windows, and the web, I’d still point them to Ruby, with the caveat that it has limits and they would be better served by Python in the long run.



  • but I think the realistic reading is it was simply a kickback to fortune 500 companies that got these politicians elected.

    If there were no legitimate geopolitical reasons, then “simply a kickback” would be much more plausible. Also, if it went to a single source company, “simply a kickback” would look true. Additionally, if it were perhaps just domestic companies, “simply a kickback” would certainly be even more likely. Lastly, the Chips Act wasn’t just about domestic production. It also blocked sales/exports of completed high end chips and chip making equipment to China. If the Chips Act were “simply a kickback” you wouldn’t do all that other stuff, and you certainly wouldn’t allow foreign winners (like Taiwan’s TSMC).

    Were there rewards because of industry lobbying? Certainly. However, unless you’re in a purely communist system of government where all the companies are owned by the state, you’re always going to have private companies benefiting from government spending, tax breaks, and subsidies. As to this only applying to Fortune 500 companies, there isn’t really a “mom and pop” semiconductor industry making handfuls of chips at a time, outside of engineering samples used in R&D for Fortune 500 companies.


  • The worst of it hasn’t happened yet. The point where consumers can no longer afford to consume is coming.

    It’s mostly already arrived.

    “As of June 30, the top 20% of earners accounted for more than 63% of all spending”

    source

    This means that the other 80% of Americans represent only 37% of the spending done today. If a company is looking to maximize profits, the typical path is to market to the group where it can earn the most money. That is less and less the bottom 80% of Americans.


  • The creator in that video seems to think the Chips Act subsidies were to benefit consumers by having affordable memory produced domestically. That wasn’t the goal. The goal was to drive GDP by having another source of domestic production, and to drive job growth/tax revenue from workers at the domestic facility. Lastly, it was to have strategic domestic production decoupled from other nations so we, as a nation, could not be held hostage by another nation (like we do to so many other nations) for crucial (pun very much intended) resources we need.

    Nothing about that is about making RAM cheaper for retail consumers.


  • The promise of “fiber to the home” is still mostly unrealized, but those trunk lines are out there with oodles of “dark fiber” ready to carry data… someday.

    Counterintuitively, I’m seeing “fiber to the home” deployed more in rural and exurb areas. My guess is this is because of the lower density: installing and maintaining copper repeaters becomes more expensive than laying long distance, low maintenance fiber. Additionally, it’s easier to obtain permits because there is far less existing infrastructure to interfere with rights of way and critical services.

    We got fiber to the home in our exurb about 4 years ago here in the USA. It’s really cheap too: 500Mb/s is $75, 1Gb/s is $100, and 5Gb/s is, I think, $200 per month.


  • Again I get your point… but no reasonable plumber would make that mistake.

    To extend your analogy, agentic AI isn’t the “reasonable plumber”, it’s the sketchy guy that says he can fix plumbing and, upon arrival, admits he’s a meth addict who hasn’t slept in 3 days and is seeing “the shadow people” standing right there in the room with you.

    I absolutely understand what happened here. The point is there is no benefit to these Agentic AIs because they need to be as supervised as a monkey with a knife… why would I ever want that? let alone need that

    I can see applications for agentic AI, but they can’t be handed the keys to the kingdom. You put them in an indestructible room with a hammer and a pile of rocks and say “please crush any rock I hand you to be no bigger than a walnut and no smaller than an almond”. In IT terms, the agentic AI could run under a restrictive service account so that even if it went off the rails it wouldn’t be able to damage anything you cared about.
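    As a minimal sketch of that idea (assuming a Linux box where “agent-svc” is a dedicated, permission-starved account and “/srv/agent-sandbox” is the only directory it can write to; both names are made up for illustration), the agent’s tool calls could be funneled through something like:

    ```python
    # Sketch: run each agent tool command as a locked-down service account.
    # "agent-svc" and "/srv/agent-sandbox" are hypothetical; the parent process
    # needs the privileges (e.g. root) required to switch to that user.
    import subprocess

    def run_agent_tool(cmd: list[str]) -> str:
        result = subprocess.run(
            cmd,
            user="agent-svc",          # drop to the restricted account (POSIX, Python 3.9+)
            cwd="/srv/agent-sandbox",  # the only place the account can write
            capture_output=True,
            text=True,
            timeout=60,                # don't let a runaway tool call hang forever
        )
        return result.stdout
    ```

    Even if the model goes completely off the rails, the blast radius is whatever that one starved account can touch: the hammer, the rocks, and nothing else.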