It's an iOS app that applies various generative art effects to your photos, letting you turn your photos into creative animated works of art. It's fully offline, no AI, no subscriptions, no ads, etc.
I'm really proud of it and if you've been in the generative art space for a while you'll instantly recognise many of the techniques I use (circle packing, line walkers, mosaic grid patterns, marching squares, voronoi tessellation, glitch art, string art, perlin flow fields, etc.) pretty much directly inspired by various Coding Train videos.
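For anyone curious what one of those techniques looks like under the hood, here's a minimal greedy circle-packing sketch (my own illustration in Python, not code from the app, and the parameters are arbitrary): drop random centers and let each circle grow until it would touch an existing circle or the canvas edge.

```python
import math
import random

def pack_circles(width=400, height=400, n_attempts=2000, max_r=40):
    """Greedy circle packing: try random centers, and size each new
    circle to the largest radius that fits without overlapping any
    previously placed circle or leaving the canvas."""
    circles = []  # list of (x, y, r)
    for _ in range(n_attempts):
        x, y = random.uniform(0, width), random.uniform(0, height)
        # largest radius that keeps the circle inside the canvas
        r = min(x, y, width - x, height - y, max_r)
        for cx, cy, cr in circles:
            gap = math.hypot(x - cx, y - cy) - cr
            if gap <= 0:  # center landed inside an existing circle
                r = 0
                break
            r = min(r, gap)  # shrink so we just touch, never overlap
        if r > 2:  # discard circles too small to be visible
            circles.append((x, y, r))
    return circles
```

Rendering is then just drawing each `(x, y, r)` with whatever fill rule you like (e.g. sampling the photo's color at the center, which is roughly how photo-based variants of this effect work).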
Hey, I've been getting into visual processing lately and we just started working on an offline wrapper for Apple's vision/other ML libraries via CLI: https://github.com/accretional/macos-vision. You can see some SVG art I created in a screenshot I just posted for a different comment https://i.imgur.com/OEMPJA8.png (on the right is a cubist plato svg lol)
Since your app is fully offline I'd love to chat about photogenesis/your general work in this area, since there may be a good opportunity for collaboration. I've been working on some image stuff and want to build a local desktop/web application. Here are some UI mockups I've been playing with (many are AI generated, though some of the features are functional; I realized that with CSS/SVG masks you can do a ton more than you'd expect): https://i.imgur.com/SFOX4wB.png https://i.imgur.com/sPKRRTx.png But we most likely don't have all the UI/vision expertise we'd need to take them to completion.
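To make the SVG-mask point concrete, here's a toy sketch of the idea (my own example, nothing from the mockups; `photo.png` is a placeholder filename): an SVG `<mask>` is luminance-based, so painting white shapes on a black rectangle punches visible "windows" into an image, and the mask geometry can be generated programmatically.

```python
def masked_image_svg(href, size=200):
    """Build a minimal SVG that reveals an image only through a
    circular mask (white areas = visible, black areas = hidden)."""
    return f"""<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">
  <defs>
    <mask id="spot">
      <rect width="100%" height="100%" fill="black"/>
      <circle cx="{size / 2}" cy="{size / 2}" r="{size / 3}" fill="white"/>
    </mask>
  </defs>
  <image href="{href}" width="{size}" height="{size}" mask="url(#spot)"/>
</svg>"""
```

Swap the circle for any generated geometry (packed circles, flow-field strokes, voronoi cells) and you get a surprisingly capable compositing pipeline with zero raster processing.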
I started out in all the usual ways - inspired by Daniel Shiffman making generative art first using Processing, then p5.js, and now mostly I create art by writing shaders. Recently after being laid off from my job, I actually took my obsession further and released my very first mobile app - https://www.photogenesis.app - as a homage to generative art.
It's an app that applies various generative effects/techniques to your photos, letting you turn your photos into art (not using AI). I'm really proud of it and if you've been in the generative art space for a while you'll instantly recognise many of the techniques I use (circle packing, line walkers, mosaic grid patterns, marching squares, voronoi tessellation, etc.) pretty much directly inspired by various Coding Train videos.
I love the generative art space and plan to spend a lot more time doing things in this area (as long as I can afford it) :-)
Just wait until we start to see the full impact of AI on learning. I suspect the results are going to be so catastrophic that there will actually be attempts to hide it.
e.g. see [1], which finds:
"The report shows a rapid change over just five years. Between 2020 and 2025, the number of incoming students whose math skills were below high school level rose nearly thirtyfold and 70% of those students fell below middle school levels. This roughly translates to about one in twelve members of the freshman class."
and
"high school math grades are only very weakly linked to students’ actual math preparation."
There is simply no way you can dangle an automatic homework and assignment solver in front of kids and not absolutely destroy their motivation, desire, and ability to learn.
Totally. We can't really measure the effect on people graduating from college right now but I'm pretty sure the value of the average college education is down since the advent of AI due to mass cheating and professors having to tailor their classes away from things AI can take advantage of. The students who love to learn will still be doing just fine, but the others - I doubt it.
I used to also believe along these lines but lately I'm not so sure.
I'm honestly shocked by the latest results we're seeing with Gemini 3 Deep Think, Opus 4.6, and Codex 5.3 in math, coding, abstract reasoning, etc. Deep Think just scored 84.6% on ARC-AGI-2 (https://deepmind.google/models/gemini/)! And these benchmarks are supported by my own experimentation and testing with these models, most recently with Opus 4.6 doing things I would never have thought possible in codebases I'm working in.
These models are demonstrating an incredible capacity for logical abstract reasoning of a level far greater than 99.9% of the world's population.
And then combine that with the latest video output we're seeing from Seedance 2.0, etc showing an incredible level of image/video understanding and generation capability.
I was previously deeply skeptical that the architecture we have would be sufficient to get us to AGI. But my belief in that has been strongly rattled lately. Honestly I think the greatest gap now is simply one of orchestration, data presentation, and work around in-context memory representations - that is, converting work done in the real world into formats/representations amenable for AI to run on (text conversion, etc.) and keeping newly trained/taught information in context to support continual learning.
>These models are demonstrating an incredible capacity for logical abstract reasoning of a level far greater than 99.9% of the world's population.
This is the key that I think Altman and Amodei see, but it gets buried in hype accusations. The frontier models absolutely blow away the majority of people on simple general tasks and reasoning. Run the last 50 decisions I've seen made locally through Opus 4.6 or ChatGPT 5.2 and I might conclude I'd rather work with an AI than with the humans involved.
It's a soft threshold where I think people saw it spit out some answers during the chat-to-LLM first hype wave and missed that the majority of white collar work (I mean it all, not just the top software industry architects and senior SWEs) seems to come out better when a human is pushed further out of the loop. Humans are useful for spreading out responsibility and accountability, for now, thankfully.
LLMs are very good at logical reasoning in bounded systems. They lack the wisdom to deal with unbounded systems efficiently, because they don't have a good sense of what they don't know or good priors on the distribution of the unexpected. I expect this will be very difficult to RL in.
> These models are demonstrating an incredible capacity for logical abstract reasoning of a level far greater than 99.9% of the world's population.
And yet they have trouble knowing that a person should take their car to a car wash.
I also saw a college professor who put various AI models through all his exams for a freshman(?) level class. Most failed, I think one barely passed, if I remember correctly.
I’ve been reading about people being shocked by how good things are for years now, but while there may be moments of what seems like incredible brilliance, there are also moments of profound stupidity. AI optimists seem to ignore these moments, but they are very real.
If someone on my team performed like AI, I wouldn’t trust them with anything.
My fundamental argument: The way the average person is using AI today is as "Thinking as a Service" and this is going to have absolutely devastating long term consequences, training an entire generation not to think for themselves.
I think you hit the nail on the head. Without years of learning by doing, experience in the saddle as you put it, who would be equipped to judge or edit the output of AI? And as knowledge workers with hands-on experience age out of the workforce, who will replace us?
The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true. We don't usually need to worry that a calculator might be giving us the wrong result, or an inferior result. It simply gives us an objective fact. Whereas the output of LLMs can be subjectively considered good or bad - even when it is accurate.
So imagine teaching an architecture student to draw plans for a house, with a calculator that spit out incorrect values 20% of the time, or silently developed an opinion about the height of countertops. You'd not just have a structurally unsound plan, you'd also have a student who'd failed to learn anything useful.
> The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true.
This really resonates with me.
If calculators returned even 99.9% correct answers, it would be impossible to reliably build even small buildings with them.
We are using AI for a lot of small tasks inside big systems, or even for designing the entire architecture, and we still need to validate the answers by ourselves, at least for the foreseeable future.
But outsourcing thinking erodes much of the brainpower we need to do that validation, because validating often requires understanding a problem's detailed structure and the path of reasoning behind a solution.
In the current situation, by vibing and YOLOing most problems, we are losing the very ability we still need and can't replace with AI or other tools.
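The 99.9% point is easy to make concrete: if each step in a chain of work is independently correct with probability p, the whole chain is correct with probability p**n, which collapses fast as n grows.

```python
p = 0.999  # per-step accuracy of the hypothetical calculator
for n in (100, 1000, 5000):
    # probability that every one of n independent steps is correct
    print(n, round(p ** n, 3))
```

Even at 99.9% per step, a thousand-step calculation is more likely wrong than right, which is why "mostly correct" tools still demand independent validation.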
If you don't have building codes, you can totally yolo build a small house, no calculator needed. It may not be a great house, just like vibeware may not be great, but also, you have something.
I'm not saying this is ideal, but maybe there's another perspective to consider as well, which is lowering barriers to entry and increased ownership.
Many people can't/won't/don't do what it takes to build things, be it a house or an app, if they're starting from zero knowledge. But if you provide a simple guide they can follow, they might actually end up building something. They'll learn a little along the way, make it theirs, and end up with ownership of their thing. As an owner, change comes from you, and so you learn a bit more about your thing.
Obviously whatever gets built by a noob isn't likely to be of the same caliber as a professional who spent half their life in school and job training, but that might be ok. DIY is a great teacher and motivator to continue learning.
Contrast to high barriers to entry, where nothing gets built and nothing gets learned, and the user is left dependent on the powers that be to get what he wants, probably overpriced, and with features he never wanted.
If you're a rocket surgeon and suddenly outsource all your thinking to a new and unpredictable machine, while you get fat and lazy watching tv, that's on you. But for a lot of people who were never going to put in years of preparation just to do a thing, vibing their idea may be a catalyst for positive change.
To continue the analogy, consider renting and the range of choices it offers. If there’s no building code and you can’t build your own house, you’re left with bad houses built by someone else. A house is more likely to be bad when the owner already knows he will not be living in it, since building it right is expensive and time consuming.
When slop becomes easier to produce, there are a lot more people ready to push it onto others than people who try to produce genuine work, especially when the two are hard to distinguish superficially.
There's another category error compounding this issue: People think that because past revolutions in technology eventually led to higher living standards after periods of disruption, this one will too. I think this one is the exception for the reasons enumerated by the parent's blog post.
In point of fact, most technological revolutions have fairly immediately benefited a significant number of people in addition to those in the top 1% -- either by increasing demand for labor, or reducing the price of goods, or both.
The promise of LLMs is that they benefit people in the top 1% (investors and highly paid specialists) by reducing the demand for labor to produce the same stuff that was already being produced. There is an incidental initial increase in (or perhaps just reallocation of) labor to build out infrastructure, but that is possibly quite short-lived, and simultaneously drives a huge increase in the cost of electricity, buildings, and computer-related goods.
But the benefits of new technologies are never spread evenly.
When the technology of travel made remote destinations more accessible, it created tourist traps. Some well placed individuals and companies do well out of this, but typically, most people living near tourist traps suffer from the crowds and increased prices.
When power plants are built, neighbors suffer noise and pollution, but other people can turn their lights on.
We haven't yet begun to be able to calculate all the negative externalities of LLMs.
I would not be surprised if the best comparison for their negative externalities were the work of Thomas Midgley, who gifted the world both leaded gasoline and CFC refrigerants.
It's funny, I'm working on trying to get LLMs to place electrical devices, and it silently developed opinions that my switches above countertops should be at 4 feet and not the 3'10 I'm asking for (the top cannot be above 4')
That's quite funny, and almost astonishing, because I'm not an architect, and that scenario just came out of my head randomly as I wrote it. It seemed like something an architect friend of mine who passed away recently, and was a big fan of Douglas Adams, would have joked about. Maybe I just channeled him from the afterlife, and maybe he's also laughing about it.
On the whole, not trusting one's own tools is a regression, not an advancement. The cognitive load it imposes on even the most capable and careful person can lead to all sorts of downstream effects.
There's an Isaac Asimov story where people are "educated" by programming knowledge into their brains, Matrix style.
A certain group of people have something wrong with their brain where they can't be "educated" and are forced to learn by studying and such. The protagonist of the story is one of these people and feels ashamed at his disability and how everyone around him effortlessly knows things he has to struggle to learn.
He finds out (SPOILER) that he was actually selected for a "priesthood" of creative/problem solvers, because the education process gives knowledge without the ability to apply it creatively. It allows people to rapidly and easily be trained on some process but not the ability to reason it out.
That would have had devastating consequences in the pre-LLM era, yes. What is less obvious is whether it'll be an advantage or a disadvantage going forward. It is like observing that cars will make people fat and lazy and have devastating consequences on health outcomes - that is exactly what happened, but cars boost wealth, lifestyles, and access to healthcare so much that the net impact was probably still positive even though people get less exercise.
It is unclear that a human thinking about things is going to be an advantage in 10, 20 years. Might be, might not be. In 50 years people will probably be outraged if a human makes an important decision without deferring to an LLM's opinion. I'm quite excited that we seem to be building scalable superintelligences that can patiently and empathetically explain why people are making stupid political choices and what policy prescriptions would actually get a good outcome based on reading all the available statistical and theoretical literature. Screw people primarily thinking for themselves on that topic, the public has no idea.
Eh 1953 was more about what’s going to happen to the people left behind, e.g. Childhood’s End. The vast majority of people will be better off having the market-winning AI tell them what to do.
Or how about that vast majority gets a decent education and higher standard of living so they can spend time learning and thinking on their own? You and a lot of folks seem to take for granted our unjust economy and its consequences, when we could easily change it.
How is that relevant? You can give whatever support you like to humans, but machine learning is doing the same thing in general cognition that it has done in every competitive game. It doesn't matter how much education the humans get - if they try to make complex decisions using their brains, then silicon will outperform them at planning to achieve desirable outcomes. Material prosperity is a desirable outcome, and machines will be able to plot a better path to it than some trained monkey. The only question is how long it'll take to resolve the engineering challenges.
There are some facts which make this not outside the realm of possibility - like computers being better at chess and Go, at giving directions to places, or at doing puzzles. (The picture-on-cardboard variety.)
I think the comparison to giving change is a good one, especially given how frequently the LLM hype crowd uses the fictitious "calculator in your pocket" story. I've been in the exact situation you've described, long before LLMs came out and cashiers have had calculators in front of them for longer than we've had smartphones.
I'll add another analogy. I tell people when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip that will be in the ballpark of 18%), and am always told "that's too complicated". It's a 3 step process where the hardest thing is multiplying a number by 2 (and usually a 2 digit number...). It's always struck me as odd that the response is that this is too complicated rather than a nice tip (pun intended) for figuring out how much to tip quickly and with essentially zero thinking. If any of those three steps appear difficult to you then your math skills are below that of elementary school.
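The three steps above, sketched out (function name mine, purely for illustration):

```python
def quick_tip(bill):
    """Round the bill to the nearest dollar, take 10% by moving
    the decimal point, then double it for a roughly 20% tip."""
    rounded = round(bill)       # step 1: round to the nearest dollar
    ten_percent = rounded / 10  # step 2: move the decimal place
    return ten_percent * 2      # step 3: multiply by 2
```

For a $47.35 bill: round to 47, shift to 4.70, double to $9.40 - three operations anyone can do in their head.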
I also see a problem with how we look at math and coding. I hear so often "abstraction is bad" yet, that is all coding (and math) is. It is fundamentally abstraction. The ability to abstract is what makes humans human. All creatures abstract, it is a necessary component of intelligence, but humans certainly have a unique capacity for it. Abstraction is no doubt hard, but when in life was anything worth doing easy? I think we unfortunately are willing to put significantly more effort into justifying our laziness than we will to be not lazy. My fear is that we will abdicate doing worthwhile things because they are hard. It's a thing people do every day. So many people love to outsource their thinking. Be it to a calculator, Google, "the algorithm", their favorite political pundit, religion, or anything else. Anything to abdicate responsibility. Anything to abdicate effort.
So I think AI is going to be no different from calculators, as you suggest. They can be great tools to help people do so much. But it will be far more commonly used to outsource thinking, even by many people considered intelligent. Skills atrophy. It's as simple as that.
I briefly taught a beginner CS course over a decade ago, and at the time it was already surprising and disappointing how many of my students would reach for a calculator to do single-digit arithmetic; something that was a requirement to be committed to memory when I was still in school. Not surprisingly, teaching them binary and hex was extremely frustrating.
I tell people when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip that will be in the ballpark of 18%), and am always told "that's too complicated".
I would tell others to "shift right once, then divide by 2 and add" for 15%, and get the same response.
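That shortcut in code (a tiny sketch of the method described, function name mine): shifting the decimal gives 10%, half of that is 5%, and their sum is 15%.

```python
def tip_15(bill):
    ten = bill / 10       # "shift right once": 10%
    return ten + ten / 2  # add half of that (5%) to reach 15%
```

For a $40 bill: 4.00 plus 2.00 is a $6.00 tip, again with no real arithmetic effort.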
However, I'm not so sure what you mean by a problem with thinking that abstraction is bad. Yes, abstraction is bad --- because it is a way to hide and obscure the actual details, and one could argue that such dependence on opaque things, just like a calculator or AI, is the actual problem.
I'm sorry, but I think you are teaching people the wrong thing if you make the blanket statement "abstraction is bad". You are throwing the baby out with the bath water. You can "over abstract", and that certainly is not good, but it's not easy to define because it is extremely problem dependent. With these absurd blanket statements you just push code quality and performance down.
Over abstraction is bad because it can be too difficult to read or it can be bad because it de-optimizes programs. "Too difficult to read or maintain" is ultimately a skill issue. We don't let the juniors decide that but neither should we have abstraction where only wizards can maintain things. Both are errors.
But abstraction can also greatly increase readability and help maintain code. It's the reason we use functions. It's the reason we use OOP. It helps optimize code, it can help reduce writing, it can and does do many beneficial things.
Lumping everything together is just harmful.
Saying abstraction is bad is no different than saying "python is bad", or any duck typing language (including C++'s auto), because you're using an abstract data type. The "higher level" the language, the more abstract it is.
Saying abstraction is bad is no different than saying templates are bad.
Saying abstraction is bad is no different than saying object oriented programming is bad.
Saying abstraction is bad is saying coding is bad.
I'm sorry, literally everything we do is abstraction. Conflating "over abstraction" with "abstraction" is just as grave an error as the misrepresentation of Knuth's "premature optimization is the root of all evil." Dude said "grab a fucking profiler" and everyone heard "don't waste time making things work better".
If you want to minimize abstraction then you can go write machine code. Anything short of that has abstracted away many actions and operations. I'll admire your skill but this is a path I will never follow nor recommend. Abstraction is necessary and our ability to abstract is foundational into making code even work.
*I will die on this hill*
> because it is a way to hide and obscure the actual details
That's not abstraction, that obfuscation. Do not conflate these things.
> one could argue that such dependence on opaque things, just like a calculator or AI, is the actual problem.
I believe that collectively we passed that point long before the onset of LLMs. I have a feeling that throughout human history, vast numbers of people were happy to outsource their thinking and even pay to do so. We just used to call those arrangements religions.
Religions may outsource opinions on morality, but no one went to their spiritual leader to ask about the Pythagorean theorem or the population of Zimbabwe.
Obviously I was using the Pythagorean theorem as a random not literal example. But I’m also curious about what you mean. Mind linking to the specific relevant parts? Linking to humongous articles doesn’t help much.
I was linking it partially tongue in cheek, but oracles and the auspices in antiquity were specifically not about morality. They were about predicting the future. If you wanted to know if you should invade Carthage on a certain day, you'd check the chickens. Literally. And plenty of medical practices were steeped in religious fare, too. If you go back further, a lot of shamanistic practices divine the facts about the present reality. In the words of Terrence McKenna, "[Shamans] cure disease (and another way of putting that is: they have a remarkable facility for choosing patients who will recover), they predict weather (very important), they tell where game has gone, the movement of game, and they seem to have a paranormal ability to look into questions, as I mentioned, who’s sleeping with who, who stole the chicken, who—you know, social transgressions are an open book to them." All very much dealing with facts, not morality.
> The cosmos of the acusmata, however, clearly shows a belief in a world structured according to mathematics, and some of the evidence for this belief may have been drawn from genuine mathematical truths such as those embodied in the “Pythagorean” theorem and the relation of whole number ratios to musical concords.
There are numerous sections throughout both of these entries that discuss Pythagoras, mathematics, and religion. Plato too is another fruitful avenue, if you wanted to explore that further.
That’s a bit cynical. Religion is more like a technology. It was continuously invented to solve problems and increase capacity. Newer religions superseded older and survived based on productive and coercive supremacy.
If religion is a technology, it's inarguably one that prevented the development of a lot of other technologies for long periods of time. Whether that was a good thing is open to interpretation.
On the other hand it produced a lot of related technology. Calendars, mathematics, writing, agricultural practices, government and economic systems. Most of this stuff emerged as an effort to document and proliferate spiritual ideas.
I see your point, but I'd say religion's main technological purpose is as a storage system for the encoding of other technologies (and social patterns) into rituals, the reasons for which don't need to be understood; to the point that it actively discourages examination of their reasons, as what we could call an error-checking protocol. So a religion tends to freeze those technologies in the time at the point of inception, and to treat any reexamining of them as heresy. Calendars are useful for iron age farming, but you can't get past a certain point as a civilization if you're unwilling to reconsider your position that the sun and stars revolve around the earth, for example.
I think it is hard to fully remove religious practice from a species. I think it exists along a spectrum: there are base ritualistic behaviors most animals engage in (e.g. a pet's ritual around eating or play), organized-social-order rituals (e.g. birds expecting a particular mating dance performed well, with this sensibility shared among the local group of birds), and finally what we observe in our own development as a species, higher religion - which is merely iteratively developed by layering these simple behaviors onto simple behaviors until the whole is quite elaborate in fact.
In that sense I think getting caught up in the "religion bad for tech" zeitgeist misses the point of what religion actually is: collectively shared ritual. Belief in God, and specific shades of that, is just the step of the dance the bird does in this case. Taking a step back, plenty of atheists engage in collectively shared ritual too: belief in the 9-to-5, the bludgeon that is four years to specialize versus lifelong apprenticeships towards true mastery, economics constraining choice rather than pure skill. Do these rituals not also hold our species and technological development back? If we talk about religion, it is worth also considering the mountain of other blockers to progress we have built for ourselves in this collectively agreed-upon daily society ritual we all partake in.
I'll say that I'm still kinda on the fence here, but I will point out that your argument is exactly the same as the argument against calculators back in the 70s/80s, computers and the internet in the 90s, etc.
You could argue that a lot of the people who grew up with calculators have lost any kind of mathematical intuition. I am always horrified by how bad a lot of people are with simple math, interest rates and other things. This has definitely opened up a lot of opportunities for companies to exploit that ignorance.
The difference is a calculator always returns 2+2=4. And even then if you ended up with 6 instead of 4, the fact you know how to do addition already leads you to believe you fat fingered the last entry and that 2+2 does not equal 6.
Can’t say the same for LLMs. Our teachers were right about the internet too, of course. If you remember those early internet wild-west school days, no one was using the internet to actually look up a good source. No one even knew what that meant. Teachers had to say “cite from these works or references we discussed in class” or they’d get junk back.
Right so apply the exact same logic to LLMs as you did to the internet.
At first the internet was unreliable. Nobody could trust the information it gave you. So teachers insisted that students only use their trusted sources. But eventually the internet matured and now it would be seen as ridiculous for a teacher to tell a student not to do research on the internet.
Most teachers would never let you grab any random internet source. We always had to get decent sources. Actual journal articles from our library’s JSTOR subscription would often be a hard requirement for a certain number of sources. Citing the text we used in class or other reference material we had access to as well. It was never free rein anything goes, unless that has changed.
I didn't mean to imply otherwise. Only to point out that in the early days of the internet, even into the 00s, teachers had a Hard No rule on any internet source.
I graduated high school in '04 and even then I was only allowed to use this system called "Galileo" which was basically a curated list of encyclopedic articles specifically meant for education and research.
On the one hand, yeah, immigration and trade issues push the buttons of the hard right.
On the other hand, our hard right has a trifecta of business, gun culture, and religion.
You're lacking the religion and gun culture, and trying to take away your health care would be the third rail, so in some respects, it would be difficult for you to follow us.
Also, without that trifecta, it seems that it would be somewhat more difficult to push the sort of anti-education agenda that gets pushed here, both at the university level and at lower levels (e.g. giving equal time to science and creationism).
You have to remember that the US was, in large part, founded by dogmatic malcontents who couldn't get along with their neighbors.
Too late. Outsourcing has already accomplished this.
No one is making cool shit for themselves. Everyone is held hostage ensuring Wall Street growth.
The "cross our fingers and hope for the best" position we find ourselves in politically is entirely due to labor capture.
The US benefited from a social network topology of small businesses, with no single business being a linchpin whose failure would implode everything.
Now the economy is a handful of too big to fails eroding links between human nodes by capturing our agency.
I argued as hard as I could against shipping electronics manufacturing overseas so the next generation would learn real engineering skills. But 20 something me had no idea how far up the political tree the decision was made back then. I helped train a bunch of people's replacements before the telecom focused network hardware manufacturer I worked for then shut down.
American tech workers are now primarily cloud configurators and that's being automated away.
This is a decades long play on the part of aging leadership to ensure Americans feel their only choice is capitulate.
What are we going to do, start our own manufacturing business? Muricans are fish in a barrel.
I actually disagree with Andrej here re: "Generation (writing code) and discrimination (reading code) are different capabilities in the brain." I would argue that the only reason he can read code fluently, find issues, etc. is because he has spent years in a non-AI-assisted world writing code. As time goes on, he will become substantially worse at it.
This also bodes incredibly poorly for the next generation, who will now mostly avoid writing code in their formative years and thus fail to even develop an idea of what good code is, how and why it works, why you make certain decisions and not others, etc. Ultimately you will see them become utterly dependent on AI, unable to make progress without it.
IMO outsourcing thinking is going to have incredibly negative consequences for the world at large.
Is coding like piloting, where pilots need a certain number of hours of "flight time" to gain skills, and then a certain number of additional hours each year to maintain their skills? Do developers need to schedule in a certain number of "manually written lines of code" every year?
Read your blog post and agree with some of it. Largely I agree with the premise that the 2nd and 3rd order effects of this technology will be more impactful than the 1st order “I was able to code this app I wouldn’t have otherwise even attempted to”. But they are so hard to predict!
Thanks, this rings true to me. The struggle is an investment, and it pays off in good judgement and taste. The same goes for individual codebases too. When I see some weird bug and can immediately guess what’s going wrong and why, that’s my time spent in that codebase paying off. I guess LLM-ing a feature is the inverse, incurring some kind of cognitive debt.
I was laid off recently along with most of the tech team (Australian company ~ very well known brand). There's a handful of people left, but even they know their time is coming soon.
And this isn't about AI (well, not primarily anyway). It's offshoring, offshoring, offshoring.
IMO, what's taking place now is absolutely transformative and the world economy is in the process of being reshaped. It's not just tech jobs that are being offshored - we're just one of the first/early movers. Many other professional/white-collar jobs (accounting, etc.) are also getting offshored at an accelerating rate. And it's happening all over the western world - it's happening in the US, it's happening in Australia, Canada, the UK, etc.
And unlike previous periods of mass offshoring, I don't think the jobs are ever coming back.
Many of these companies, new and existing, were not around for the first wave of engineering offshoring many years ago. Back then, the product/service degraded and companies ended up bringing the work back onshore.
It's a cycle that will repeat. The product degrades, there will be public outrage, then they will onshore the work to fix the problems caused by offshoring.
IMO, things are different this time (as someone who has been in this industry for about 20 years now) and I don't see these jobs coming back.
For one, many of these companies are now used to their tech teams being remote. The tools, culture, infra, etc. have all become remote-first over the last ~5 years, which lessens the shock of going fully offshore.
Two, many tech teams in the western world are already partially offshored and have been for some time now. I know where I worked, a reasonable % of the team was already offshore in low COL countries (India, etc.). What's happening now is just the expansion of that cost saving after initial testing of the waters was successful.
Three, the quality gap between offshore teams and their western counterparts is now much smaller, and AI will be used to lessen the gap even further (along with just throwing more bodies at each problem which you can do when your salaries are 1/3rd of what they are here).
Four, many products/services now have captured markets with strong network effects, which means they can weather a heavy degradation of services with little to no loss of customers. It's called enshittification, and businesses are doing it now because they absolutely know they can, and get away with it.
I think in the very long term though what will happen is countries like India will actually end up with salaries comparable to western workers, so even though the gap might be smaller, the cost/benefit ratio will change again.
That happened during the last offshoring hype cycle as well. Those Indian developers aren't stupid -- the ones who deliver quality work will soon move somewhere they can earn a salary to reflect that, and it will be comparable to a US/UK/EU salary. Companies who insist on sticking to low salaries are left with the worst people.
I worked with some very good offshore engineers. They all left pretty quickly for a job with double the salary, or moved abroad outright to claim it. The only ones who stuck around were the ones whose poor skills kept them from landing a better job.
It's also great for productivity when your offshore team is a rapidly rotating cast. I remember being in meetings where without any announcement, half of the developers who had slowly started to get to know the project were replaced with new faces who had no idea what they were supposed to do.
It is interesting that a similar thing happened when millions of manufacturing jobs were offshored to China and other low-cost locations. Now it has come to the tech industry, and every other industry where it is possible. It hurts when it comes for us (speaking as a person in tech in a high-COL country). I hope we will find other roles, just as people did when they moved on from manufacturing.
Yep, sold to DAZN. Who in turn do all their development out of Hyderabad, India. Almost entire tech team locally is gone now with handover done to that team.
I've always found it interesting that on Hacker News, articles like this pretty much fly by without commentary and are routinely downvoted, while articles speculating about the inevitable doom of mankind due to AI generate hundreds of comments and lively discussion.
We're so in search of novelty that we ignore the bus steadily making its way straight towards us as we stand in the middle of the road, doing absolutely nothing, with no hint that anything or anyone will come to save us. Instead we keep reading tea leaves and imagining more fascinating and wonderful dangers that have a near-zero chance of manifesting before the bus hits us.
To some extent, the bus ending is just too boring, it seems, for anyone to really become engaged by it - narratively speaking.
The other bus ending (buses always travel in packs, as any bus commuter knows) is fertility collapse. People are just not getting around to having children. That's a big yawn too. How do you make a drama out of nothing happening?
We're not going to make it to anywhere close to AGI before we see widespread and systematic societal and environmental collapse on almost all fronts due to climate change.
If you're scared of AGI, instead step away from your monitor, put down the techno-goggles and sci-fi books, and go educate yourself a bit about the profound ways we are changing the natural world for the worse _right now_.
I can recommend a couple of books if you'd like to learn more:
You'll be glad to hear that the recent sudden jump in sea temperatures isn't caused by carbon dioxide. Turns out it just hasn't been windy enough in the Sahara lately (and environmentally friendly boat fuel may have contributed as well).
https://heatmap.news/climate/why-is-the-atlantic-ocean-so-ho...
People really seem to enjoy the evil/powerful AGI scenario though. It's definitely a fun one. I would recommend reading "Metamorphosis of Prime Intellect", having a nice time with that, then coming back to the real world.
It's an iOS app that applies various generative art effects to your photos, letting you turn your photos into creative animated works of art. It's fully offline, no AI, no subscriptions, no ads, etc.
I'm really proud of it, and if you've been in the generative art space for a while you'll instantly recognise many of the techniques I use (circle packing, line walkers, mosaic grid patterns, marching squares, Voronoi tessellation, glitch art, string art, Perlin flow fields, etc.), pretty much directly inspired by various Coding Train videos.
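For anyone curious about one of those techniques: the classic circle-packing effect is simple enough to sketch in a few lines. This is just an illustrative rejection-sampling version in Python (not the app's actual code, and real implementations typically use spatial indexing to speed up the overlap check):

```python
import random

def pack_circles(width, height, attempts=2000, min_r=2.0, max_r=40.0, seed=1):
    """Naive circle packing: drop random centers and keep the largest
    radius that fits without overlapping existing circles or the edges."""
    rng = random.Random(seed)
    circles = []  # list of (x, y, r)
    for _ in range(attempts):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        # radius is limited by the nearest canvas edge...
        r = min(max_r, x, width - x, y, height - y)
        # ...and by the nearest existing circle's boundary
        for cx, cy, cr in circles:
            gap = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 - cr
            r = min(r, gap)
        if r >= min_r:  # reject centers that can't fit a visible circle
            circles.append((x, y, r))
    return circles
```

In a real sketch you would then draw each `(x, y, r)` tuple, often colored by sampling the underlying photo at the circle's center, which is roughly how the photo-to-circles effect works.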
Direct download link on the App Store is https://apps.apple.com/us/app/photogenesis-photo-art/id67597... if you want to try it out.
* Coming to Android soon too.