but it pretty much only works for JavaScript programmers and their friends (or folks interested in learning JavaScript).
Other tools which I'd like to put forward as meriting discussion in this context include:
- LyX --- a front-end for LaTeX; making new layout files allows a user to create a customized tool for pretty much any sort of document they might wish to work on
- pyspread --- every cell is either a Python program or the output of one, and since cells can be images, one can do pretty much anything without the overhead of making or reading a file
- Ipe https://ipe.otfried.org/ --- an extensible drawing program, though extending it really needs a simpler mechanism. I'd love to see a tool in the vector-drawing space that addressed that --- perhaps the nascent https://graphite.rs/ ?
Very pleased to see LyX and Ipe here. They've been invaluable throughout my academic career, and are just a pleasure to use (once you get the hang of them).
The Qt/KDE world has (imho) some of the best quality software I've used, and is, astonishingly, relatively unpopular compared to FOSS competitors.
Ipe now has a web interface (through the magic of Qt) and I remember there was a plan to make one for LyX, though if it ever happened, I couldn't find it.
Neat — Scrappy looks like a lovely prototype! As the creators say in their writeup, it fits nicely into the lineage of HyperCard-style “media with optional scripting” editors, which provide a gentle slope into programming.
In the section on dynamic documents towards the end of our essay, we show several of our lab’s own takes on this category of tool, including an example of integrating AI as an optional layer over a live programmable document.
Yes, absolutely, even trivial things like colors can rarely be changed, let alone more involved UI parts.
> Inflexible electronic medical records systems are driving doctors to burnout.
> When different users have different needs, a centralized development team can’t possibly address everyone’s problems.
That's not the main issue, which is that they don't address *anyone's* problems well, since actual users have very little power here and the devs are far removed from the actual user experience. Like that example of filling in useless fields - that serves no one!
> when a developer does try to cram too many solutions into a single product, the result is a bloated mess.
Unless it's organized well? There is no inherent reason that many solutions equals a mess, or even bloat (e.g. if solutions are modules you can ignore or not even install, then an app with only the solutions you care about has no bloat)
But in general, very laudable goals, would be very empowering for many users to live in a dream world where software is built based on such principles...
I appreciate the idea behind the post, because certainly, we need more hackable apps now that everything is becoming a SaaS that effectively cannot be archived or hacked on (unlike, say, WinAmp or major releases of Windows and their respective fan updates, or, for a more common example, game mods).
Unfortunately I think that while there’s a decent number of power users and people who have the aptitude to become power users who will make use of software made to be deeply customizable, they are outstripped many times over by people who don’t see software that way and have no interest in learning about it. People are quick to point fingers about why the situation is as it is, but the truth is that it was always going to be this way once computers became widely adopted. It’s no different from how most people who drive cars can’t work on them and why few feel comfortable making modifications to their houses/apartments. There’s just a hard limit to the scope and depth of the average individual's attention, and more often than not technical specialization doesn’t make the cut. No amount of gentle ramping will work around this.
That doesn’t mean we shouldn’t build flexible software… by all means, please do, but I wouldn’t expect it to unseat the Microsofts and Googles of the world any time soon. I do however think that technically capable people should do anything they can to further the development of not just flexible, but local-first, hackable software. Anything that’s hard-tethered to a server should be out of the running entirely and something you can keep running on your machine regardless of the fate of its developer should take priority over more ephemeral options.
Pretty much everyone makes modifications to their homes: arranging furniture, choosing decorations, storing tools and implements and books and...
I've been to hotel rooms that looked identical to each other. I've never been to anybody's long-term home that wasn't unique—and unique in obvious, personalized ways. Even the most regularized housing ends up unique: I've visited everything from US dorm rooms to ex-Soviet housing blocks to cookie-cutter HOA-invested suburbs and yet, rules and norms aside, folks' private spaces were always unique, adapted through both conscious action and by unconscious day-to-day habits.
Just because 90% of these modifications did not need more DIY tools than the occasional hammer and nail does not mean they don't "count". That just shows that reducing friction, risk and skill requirements matters.
Gentle ramping helps in two ways. For people who would be inclined to get into more "advanced" modifications, it lowers the activation energy needed and makes it easier to learn the necessary skills. But even for people who would not be inclined to go "all the way", it still helps them make more involved modifications than they would otherwise. A system with natural affordances to adaptation lets people make the changes they want with less thought and attention than they would otherwise need—the design of the system itself takes on some of the cognitive load for them.
With physical objects like home furniture, the affordances stem from the physical nature of the item and the environment. With software, the affordances—or lack thereof—stem entirely from the software's design.
Mainstream software systems are clearly not designed to be adaptable, but we should not take this as a signal about human nature. Large, quasi-monopolistic companies are driven by scalability, legibility and control far more than user empowerment or adaptability. And most people get stuck with these systems less because they prefer the design and more because there are structural and legal obstacles to switching. The obstacles are surmountable—you can absolutely use a customized Linux desktop day-to-day, I do!—but they add real friction. And, as we repeatedly see through both research and observation, friction makes a big difference to most people. Friction has an outsize impact not because of people's immutable preferences but, as you said, because people have finite pools of time and attention with too many demands to do everything.
I am making an offline-first thing that serialises to a single file that can be opened without a local file server.
Still working on the UX a little, but it seems close to what you want (and I agree). The vision statement is about creating immortal software, exactly to fight bitrot.
https://github.com/tomlarkworthy/lopecode
Well, with cars, I think many would appreciate if they too were more malleable. My Dad has often told me of a car he once had that was really easy to repair (edit: it was a VW Beetle), which is notable, as he was never known as someone who was terribly handy. Doubtful anyone would have that experience with today's cars.
> It’s no different from how most people who drive cars can’t work on them
What if, instead of cars and driving, we use reading and writing as the metaphor for the kind of media/utility computing can have? I'd argue it then changes the whole nature of the argument.
Yeah, I'm gonna have to disagree with that. Computer users are shaped by the tools they have available to them. There will always be varying degrees of how much customization a user takes on, but when the environment is designed around customization, users end up using the tools that are given to them.
I’m not disagreeing with you, but rather suggesting that the ceiling for how much the average user can/will leverage customization is surprisingly low.
If we’re looking for levers to pull to help more people become advanced computer users, I believe progressive disclosure combined with design that takes advantage of natural human inclinations (association, spatial memory, etc.) is much more powerful. Some of the most effective power users I’ve come across weren’t “tech people” but instead those who’d used an iMac for 5-10 years doing photography or audio editing or whatever and had picked up all of the little productivity boosters scattered around the system, ready for the user to discover at just the right time.
With that in mind, I think the biggest contributor to reduced computer literacy is actually the direction software design has taken in the past 10-15 years, where proper UI designers have been replaced with anybody who can cobble a mockup together in Photoshop, resulting in vast amounts of research being thrown out in favor of Dribbble trends and vibes. The result is UI that isn’t humanist, doesn’t care to help the user grow, and is made only with looking pretty in slideshows and marketing copy in mind.
> I’m not disagreeing with you, but rather suggesting that the ceiling for how much the average user can/will leverage customization is surprisingly low.
The average person is also a crappy writer, bad musician and lousy carpenter. But a notepad and a pen don’t tell me how to use them. They don’t limit my creative capacity. Same story with a piano, or a hammer and chisel. I wish computers were more like that.
Your point stands. Most notebook users never use it to write a bestselling novel, or draw like Picasso. But the invitation to try is still in the medium somehow. Just waiting for the right hand.
I agree with the rest of your comment. As software engineers, we could build any software we want for ourselves. It’s telling that we choose to use tools like git and IntelliJ. Stuff that takes months or years to master. I think it’s weirdly perverted to imagine the best software for everyone else is maximally dumbed down. That’s not what users want.
Rather than aiming for “software that is easy to use” I think we should be aiming for “software that rewards you for learning”. At least, in creative work. I’m personally far more interested in making the software equivalent of piano than I am in making the software equivalent of a television set.
Give me Delphi. No, seriously, give me Delphi, but for the web and in a modern popular programming language. Python would be great, but I will not turn my nose up at TypeScript or Go or Lua.
For those who don’t know, Delphi was (is?) a visual builder for Windows apps that you programmed with a dialect of Pascal. It was effing magic!
Nowadays the web ecosystem is so fast-paced and so fragmented, the choice is paralyzing, confidence is low. The amount of scaffolding I have to do is insane. There are tools, yes, cookie cutters, npx’s, CRAs, copilots and Cursors that will confidently spew tons of code but quickly leave you alone with this mess.
I've been playing around with the Godot game engine as a sort of modern successor to Delphi / Lazarus. I'm currently messing around with trying to create some database server management software using it.
GDScript is pretty similar in feel to Python, and you can also use C# if you want to. It has some level of GUI controls in the framework (not sure how many yet, but all of the GUI controls used to build the editor are available for use).
I want to believe the 3d capabilities might be useful for some kind of UI stuff, but I don't really have a real idea how to make that work - just a "wouldn't it be neat if..." question about it right now.
I have managed to make some pretty incredible tools, it definitely feels like magic.
I would say I split my time 70% using BMAD as an assistant to build out my scope and clarify what I am trying to do in my own head, then 30% supervising Claude Code.
I have also managed to build simpler tools using Streamlit, to great effect.
Your best bet might be using Blazor with a RAD development tool, though I haven't tried it. Blazor notably has DevExpress components, which is what makes Delphi tolerable.
I've reverse-engineered a couple of programs before in order to get them to do what I want. Things like setting default options in a program that doesn't let you change the defaults, or getting older Windows programs to function correctly.
I've also patched open-source programs locally in order to get them to do what I want but wouldn't be suitable for upstreaming. For example, I've reverted the order of buttons in a "do you want to save?" close dialog when they changed in an update.
Minor stuff, but just being able to do this is amazing. The trouble is, developers - at least those of closed-source programs - don't want you to be able to do that, partially due to a lot of them relying on security by obscurity in order to earn money.
As such, it feels like the only way you're going to get developers to be on board with something like this is to be able to have them specify what people can change and what people can't change - and that's something that developers already do (whether they realise it or not) with things like INI files and the Registry.
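For example, a developer-sanctioned customization surface can be as simple as a documented settings file. A hypothetical INI file for a text editor might look like this (the file name and every key here are illustrative, not from any real program):

```ini
; settings.ini -- options the developer explicitly exposes.
; Anything not listed here stays under the developer's control.
[editor]
font_size = 12          ; user-changeable default
word_wrap = true
default_save_format = txt

[updates]
check_on_startup = false
```

The file itself documents the boundary: what is listed is fair game to change, and everything else is implicitly off-limits.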
This is why people using UNIX-based systems campaign for small programs that do one thing and do it well. Being able to combine these small programs into a pipeline that does exactly what you want? Now that's amazing.
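As a concrete sketch of that pipeline style, here is the classic word-frequency one-liner built entirely from single-purpose tools (the sample input is made up):

```shell
# Each tool does one job; the pipe composes them into a word counter.
printf 'the cat sat on the mat\nthe dog sat\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -1     # most frequent word, with its count: "3 the"
```

None of these programs knows about the others; the composition lives entirely in the pipeline, which is exactly the flexibility the comment is describing.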
I have also patched open-source programs locally to get them to do what I want, though some programs are very large and take a long time to compile, while others are much smaller and compile quickly. I have also sometimes modified binaries, or entirely rewritten a program to get it to do what I wanted. Sometimes I was able to do it by editing configuration files manually rather than using the UI, or by changing the permissions of files.
Do you think it's possible to make GUI software with a Unix philosophy? Specifically piping together small programs seems natural in a shell but I've struggled to figure out how it could work for GUI apps.
I hope you're looking carefully at COM/OLE Automation, which achieved pretty much all of those things.
- In-process and cross-process language-agnostic API bindings.
- Elaborate support for marshalling objects and complex data structures across process boundaries.
- Standardized ways to declare application and document object models that can be used by external applications, or internally by application add-ons.
- A standardized distribution system for application extensions, including opportunities for monetization.
- Standardized tools for binding application APIs to web services and database services.
- A default scripting engine (VBA) that can be embedded into applications.
- Admittedly primitive and mostly ill-advised support for dynamically typed objects, and composable objects.
And it provides opportunities for all of the levels of application customization you seem to be looking for.
- Trivial tiny customizations using in-app VBA.
- The ability to extend your application's behavior using addons downloadable from a marketplace WITHOUT trying to capture a percentage of licensing revenue from those who want to monetize their add-ons.
- The ability to write scripts that move data between the published document object models of various applications (and a variety of standard data formats).
- The ability to write fully custom code that lives within applications and interacts with the UI and with live documents within those applications (i.e. write-your-own add-ons).
Plus it would be enormously fun to build the equivalent functionality of COM/OLE with all the benefits of hindsight, none of the cruft incurred by Visual Basic, and lessons in hand from some of the things COM didn't do well. (SVG as a graphics transport, perhaps? A more organized arrangement of threading-model options? Support for asynchronous methods? A standardized event mechanism?)
Questions that come to mind:
- What can you get away with not doing that COM does do? Not much, I think.
I used to own COM at Microsoft; I think that MCP is the current re-instantiation of the ideas from COM and that English is now the new scripting language.
There are people who like to tinker, to play with things, take them apart, learn how they work, put them back together again. Some of them go on to make new things. A few of them will make things that change the world. I'd like to live in a world that does more to encourage imagination and creativity, that lets people participate in creating their future. Software doesn't all have to be black boxes with No User Serviceable Parts Inside. We've seen what people can do with things like HyperCard, Visual Basic, Excel. And have fun doing it.
As described in the article, sharing data between apps is currently impossible. I wish Solid/PODs had taken off, but I get the sense that project spent more time on ontologies and less on making useful things.
How can we draw apps into using a common data backend owned by the user?
Almost all apps I use can export their data into common and open formats. If you build your workflow around common file formats instead of specific apps, then you can more easily share data between apps.
> we created Patchwork—a web-based collaboration environment for malleable software... storing both user data and software code in Automerge documents. On top of that, it adds version control utilities like history views and simple branching. These tools apply to any document in the system—whether a piece of writing, or code for a software tool... Eventually we also plan to release Patchwork as an open-source tool
What milestones would you like to hit before open-sourcing it? As an outsider, it looks like it has a LOT of features, and I wonder if there's feature creep. Still, version control for everything is a tall order, so perhaps it needs plenty of time to bake.
Actually, Patchwork has surprisingly few features! Think of it more like an OS than a product. The goal is a small set of composable primitives that let you build many things - documents, tools, branching/diffs, plugins…
To answer your question: although we use Patchwork every day, it’s currently very rough around the edges. The SDK for building stuff needs refinement (and SDKs are hard to change later…) Reliability and performance need improvement, in coordination with work on Automerge. We also plan to have more alpha users outside our lab before a broader release, to work through some of these issues.
In short, we feel that it’s promising and headed in a good direction, but it’s not there yet.
In the 'Tools, not Apps' part of the article they reference Michel Beaudouin-Lafon's talk 'A World Without Apps' which goes back in time and shows the Xerox Star operating system:
https://m.youtube.com/watch?v=ntaudUum06E&t=313s
Though clearly imperfect, video games have, at times, managed to reflect some of these principles. There was a brief moment at the peak popularity of World of Warcraft when the game was highly moddable -- if you walked up to a stranger's monitor, you would not recognize the interface they were using for the game, because they'd customized it so much.
The gaming audience is probably the most demanding of any regarding customization, modding, accessibility and other similar principles -- when the market forces line up and developers are flush enough to offer more malleability, video games frequently do.
Ink & Switch is doing great (some would say overdue) research that’s on the boundary of commercializability but outside the bounds of what the big corporates want to do with computers.
Great to see them pushing work like this, building experiments, and talking about what they’ve learned.
Have you had a chance to look at atproto? I was a bit surprised to see no mentions of it. It powers Bluesky but is not coupled to it — the idea is essentially that your public data is meaningfully owned by you (can move hosting without losing identity) across all applications in a global collection, and different app backends can “derive” aggregated views (like Bluesky’s database) from the public network data of all users.
Yes, I think atproto is a great example of the “shared data” pattern for composable tools! Especially since it handles public social scale, which is not addressed by the other systems we mention.
AFAIK, atproto is primarily designed to support multiple distinct clients over shared data, but I also wonder if it could help with composing more granular views within a client. I previously worked on a browser extension for Twitter, and data scraping was a major challenge - which seems easier building on an open protocol like atproto.
Sorry we didn’t mention it — it is on our radar, but we ran out of space and had to omit lots of good prior art.
I should also mention btw that Bluesky user-configurable feeds is a perfect example of a gentle slope from user to creator!
I love the optimism, but I'm a pessimist. Even at the first paragraph:
> "The original promise of personal computing was a new kind of clay—a malleable material that users could reshape at will. Instead, we got appliances: built far away, sealed, unchangeable. When your tools don’t work the way you need them to, you submit feedback and hope for the best. You’re forced to adapt your workflow to fit your software, when it should be the other way around."
I already have objections: Users and businesses overwhelmingly voted with their wallets that they want appliances. The big evil megacorps didn't convince them of this - Windows was a wildly malleable piece of software in the 90s and 2000s, and it didn't exactly win love for it. The Nintendo Switch sold 152 million units; the malleable Steam Deck hasn't broken 6 million.
Software that isn't malleable is easier to develop, easier to train for, easier to answer support questions for, and frequently cheaper. Most users find training for what's off-the-shelf already difficult - customizing it is something that only a few percent would even consider, let alone do. Pity the IT Department that then has to answer questions about their customizations when they go wrong - user customizations can easily become their own kind of "shadow IT."
The send off is also not reassuring:
> "When the people living or working in a space gradually evolve their tools to meet their needs, the result is a special kind of quality. While malleable software may lack the design consistency of artifacts crafted behind closed doors in Palo Alto, we find that over time it develops the kind of charm of an old house. It bears witness to past uses and carries traces of its past decisions, even as it evolves to meet the needs of the day."
If you think this is okay, we've already lost. People simply will not go back to clunky software of the 2000s, regardless of the malleability or usability.
You make a fair point! Ease of use matters. We all want premade experiences some of the time. The problem is that even in those (perhaps rare!) cases where we want to tweak something, even a tiny thing, we’re out of luck.
An analogy: we all want to order a pizza sometimes. But at the same time, a world with only food courts and no kitchens wouldn’t be ideal. That’s how software feels today — the “kitchen” is missing.
Also, you may be right in the short term. But in the long run, our tools also shape our culture. If software makes people feel more empowered, I believe that’ll eventually change people’s preferences.
Well, if I may continue my pessimistic outlook, I would simply say that anyone can cook, but not everyone can cook. Programmers are chefs - we take ingredients called SDKs and serve them up into meals called custom software. Anyone who isn't a chef, might need to buy the packaged cake mix at Walmart.
For something as complex as software, it's sad, but it's almost... okay? Every industry has gone through this; there was a time when cars were experimental and hand-assembled. Imagine if Henry Ford in the 1920s had focused on democratizing car parts so anyone can build their own car with thousands of potential combinations; I don't think it would have worked out. It is still true that you can, technically speaking, build your own car; but nobody pretends that we can turn everyone into personalized car builders if we just try hard enough.
I gotta say I don’t understand your point about cooking — billions of people who aren’t professional chefs cook meals every day! These meals may not live up to restaurant standards but they have different virtues — like making it taste just the way you like it, or carrying on a family tradition.
On that note, Robin Sloan has a beautiful post about software as a home cooked meal…
That said, I think talking about cars may be stronger ground for the argument you’re making. Mass production is incredible at making cheap uniform goods. This applies even more in software, where marginal costs are so low.
The point of our essay, though, is that the uniformity of mass produced goods can hinder people when there’s no ability to tweak or customize at all. I’m not a car guy, but it seems like cars have reasonably modular parts you can replace (like the tires) and I believe some people do deeper aftermarket mods as well. In software, too often you can’t even make the tiniest change. It’s as if everyone had to agree on the same tires, and you needed to ask the original manufacturer to change the tires for you!
First, thanks for the original article; it is great to know a team is going deep on this.
I am a bit fed up with software less because of malleablity but because of the cloud walled gardens. I can't open my Google doc in something else like I can a pdf in different programs. Not without exporting it.
This got me interested, and I found remotestorage.io, which looks very promising. I like the idea that I buy my 100 GB of cloud storage from wherever, then compose the apps I want to use around it.
I hadn't thought of malleable software... that's a whole other dimension! Thanks for introducing this as a concept worth talking about. Of course I have heard of elisp and used excel but haven't thought of it front and centre.
In terms of cooking... I feel like cooking is potentially easier because, for the most part (with some exceptions), if I know food hygiene and how to cook, then it is an additive process. Chicken plus curry plus rice. Software is like this too, until it isn't. Excel docs do a great simple budget, but not a full accounting suite. With the latter you get bogged down fixing bugs in the sheet as you try to use it.
I think it is good you are researching as these could be solvable problems probably for many cases.
Something I have always thought about is that sometimes it matters less whether the software is open source than whether the file format is. Then people can extend it by building more around the file format. A tool might work on part of the format where an app works on all of it. I use free tools to sign PDFs, for example.
Also adding that software being inflexible only because it is mass-produced describes the pre-Enshittification era, which we have already left behind.
Since the last decade or so at the latest, software is often designed as an explicit means of power over users, and applications are made deliberately inflexible to, e.g., coerce users to watch ads, purchase goods or services, or simply stay at the screen for longer than intended.
(Even that was already the case in niches, especially "shareware". But in a sense, all commercial software is shareware now)
> But in the long run, our tools also shape our culture. If software makes people feel more empowered, I believe that’ll eventually change people’s preferences.
I'm really curious to see how the overlap with BABLR plays out. In many ways we're doing the same experiments in parallel: we're both working on systems that have a natural tendency to become their own version control, and which try to say what the data is without prejudice as to how it might be presented.
In particular BABLR thinks it can narrow and close the ease-of-use gap between "wire up blocks" style programming and "write syntax out left to right" style programming by making a programming environment that lets you wire up syntax tree nodes as blocks.
It's still quite rough, but we have a demo that shows off how we can simplify the code editing UX down to the point where you can do it on a phone screen:
Try tapping a syntax node in the example code to select that node. Then you can tap-drag the selected (blue) node and drop it into any gap (gray square). The intent is to ensure that you can construct incomplete structures, but never outright invalid ones.
> That’s how software feels today — the “kitchen” is missing.
I believe you'll want to read this essay which appeared in the Spring 1990 issue of Market Process, a publication of the Center for the Study of Market Processes at George Mason University ...
"An Inquiry into the Nature and Causes of the Wealth of Kitchens"
by Phil Salin
Having worked for him, I'd say his Wikipedia entry doesn't do him justice, but it is a good start if you're curious -- like your Ink & Switch group, he spent many years trying to create a world-changing software/platform [AMIX, sister co. to Xanadu; both funded in the 1990s by Autodesk].
Look at HyperCard (more or less dead, regrettably) or Excel and you'll see many useful "applications" created by non-programmers over the years.
People want to create, but need tools to make this easier / more abstract than regular programming. Most companies want to get them into their walled gardens instead, especially web-based companies today.
> Windows was a wildly malleable piece of software in the 90s and 2000s, and it didn't exactly win love for it.
Is that so? I remember the custom styling options in Win98 and ME/2000 still very fondly. And there were lots of people who invested effort in making their own color schemes, meticulously assembling personal toolbars in Office, etc. (The enthusiasm went away the first time you had to reinstall and were faced with the choice of doing it all again or sticking with the defaults. But I'd chalk this up to Windows not treating the customization data as important enough to provide backup/export functionality, not that people didn't want to customize)
The features increasingly went away in later Windows and Office versions, but I assumed it was some corporate decision. Was there ever actual backlash from users against those features?
Tech-oriented people love software malleability and also can handle the responsibility - e.g. understanding something that's broken + customized by you could have been broken by you.
Non tech-oriented people, the masses, absolutely love customizability and malleability--but aren't willing to handle the responsibility. They will reach out to tech support who can't possibly know every customization option of every application and its effects, and complain when they tell them to reset/reinstall.
And in a corporate environment where the company provides the PC, the company would rather not deal with it. Office dominates at the workplace, is mostly making money from corporate users, and users want it to behave the same way it does in the workplace. So any backlash by users is simply not going to matter unless it might cause companies to not renew their licenses.
A company I work for is moving to Office-on-the-web for PCs that are used by people who don't really use Office that much except possibly to read Word docs, in order to save on licensing costs I presume. It's even less customizable than any desktop version. So the trend is going to continue.
You're talking about a world in which costs are centralized. A central entity handles all R&D costs and all customer support costs for one product.
If you split the support costs between many members of a community though, you don't need to fear customization. Then, ideally, the users who are most alike will support each other, the same way you can get a degree of support for some particular flavor of Linux by seeking out other people who use that flavor (or another one that's enough like it)
Backlash will be in the form of working, competing software maintained by communities, precisely because this is the only form of backlash that might cause companies not to renew their licenses.
Well, there's a (modest) learning curve involved in customizing color schemes, and of course in the more complex tasks that are still within the user's options.
Users can be fearful of "messing it up" if they change defaults. Making changes necessarily confers responsibility to follow instructions, learn how to alter settings and know the set of options that are appropriate to change and which are not.
That takes a pretty basic safety mechanism to address: require confirmation after the change. Windows has (had?) this - 15 or 30 seconds or whatever after a change (like to the resolution), it reverts without confirmation. This makes changes of all sorts easy and cheap to perform. The worst case is you idle for 30 seconds waiting for it to go back to a legible form.
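The revert-on-timeout mechanism is easy to sketch. Here's a toy Python version (class and method names are invented for illustration; this is not how Windows actually implements it):

```python
import threading

class RevertableSetting:
    """Apply a change; revert automatically unless confirmed in time."""

    def __init__(self, value, timeout_s=30.0):
        self.value = value
        self.timeout_s = timeout_s
        self._timer = None
        self._old = None

    def apply(self, new_value):
        # Remember the old value and start the revert countdown.
        self._old = self.value
        self.value = new_value
        self._timer = threading.Timer(self.timeout_s, self._revert)
        self._timer.start()

    def confirm(self):
        # User confirmed the change: cancel the pending revert.
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None
            self._old = None

    def _revert(self):
        # Timeout expired with no confirmation: restore the old value.
        self.value = self._old
        self._timer = None
```

The worst case is exactly the one described above: an unconfirmed change silently goes back to the last known-good value.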
I think having a monochrome mode (which might be available at startup, and would also temporarily reset the font) would help with this and other problems (e.g. if one colour of the display is defective). It could be used by the UI to confirm the change, and could also display a message at startup so that you can recover from this and other problems (including screen resolution, colours, fonts, languages, and many more).
Even if this specific example is flawed, non-technical users can and do end up in similar non-sensical situations that require a call to support to sort out. The more customization that's possible, the more complicated those calls can get. (Think of the support guy that has to figure out that Grandma's Windows Home setup has custom group policy settings that her well-meaning grandson setup to make things simpler for her by hiding this or that, and now she can't follow the tech's instructions that work for 99.9% of users)
Not only that, but they do so enough that the added cost to field those support calls is enough for companies to change their products to reduce their likelihood.
Almost no-one on this forum falls into the category of user I'm describing. And this kind of user is one of the most common for general consumer software. There is a real cost burden to supporting software with configurability.
And when this kind of thing gets messed up, do users go "Oops! My bad!"? No, they go "This software sucks, I'm going to use <competitor> instead where this kind of thing never happens!"
A common failure mode I’ve seen: since Windows 8/8.1 iirc, so-called “Microsoft accounts” are used to log in to the OS, as opposed to local user accounts, which were the status quo for personal computers and are managed locally by the OS on behalf of Administrator users. Many legacy Windows users had and have no idea what the difference is or why it matters, but part of the Microsoft account setup flow in the Windows OOBE involves setting up 2FA for the new account. I think it lets you use email or SMS, and maybe even a phone call, to get the 2FA code, and I think you are given the option to complete the 2FA later, in case the code is delayed, but I don't remember for sure.
I can’t count how many people I’ve helped regain access to their computer login after they lost access to the method used to receive 2FA codes for their Microsoft account, which is necessary to log in if you have forgotten your password. The Microsoft account setup won’t let you make a password-free login unless you use a local account, and short, easily guessable passwords don’t meet their online account security requirements. Most people probably don’t want a Microsoft account if it has this failure mode, but people don't know the trade-offs at the time of account setup, and Microsoft uses that ignorance as leverage to get people signed into everything, so that you will have opted in to all of this. It’s such an own-goal by Microsoft and it makes me feel for users who have no idea how any of this works. It’s a hard problem to solve, I’m sure, but it shouldn’t be like this.
The people most disadvantaged by the high-tech, high-security thrust of modern computing are those with the least skill with technology. Low-skill users are also most at risk from scams, malware and other tactics, so I don’t mean to say that having no password is good - it's a bad solution to the problem of computers being hard to use. But many people don’t know what they don’t know, so anything they haven’t seen before is cause for concern or alarm. Since most people have forgotten they even have a Microsoft account by the time they have trouble logging in to their computer with one, they click around until they reach account recovery, and then usually get their account locked because they can’t solve security challenges they never faced or anticipated when doing the initial setup, perhaps years prior.
> People simply will not go back to clunky software of the 2000s, regardless of the malleability or usability.
Software in the 2000s was markedly better than software today. But it's cheaper and easier for companies to produce shitty software, so that's what we get. It has nothing to do with consumer preference.
Training and support for applications isn't a thing outside of enterprises, especially for SaaS web apps. You simply cannot reliably get support for Google or Facebook services unless you know some very obscure channels. It is wrong to say this is a trade-off: it is a regression.
In my experience, people yearn for coding and modding. Given the tools a whole bunch of people will do domain-specific miracles using macros and other tools. I'm almost convinced that teaching programming is easier than teaching software development boilerplate.
I really love those customization power charts and really happy to see that my anecdote-based thoughts might actually have some grounding behind them.
I was thinking a lot about software malleability - but from a technical perspective.
I am on the verge of building something useful - only if I could find the time to do it.
Here's my premise: if you use something like a game engine, say Unity or Unreal, you basically have the ability to modify everything in real time and have it reflected inside the editor immediately. You could change textures, models, audio, even shaders (which are a kind of code), and have the editor reload just that tiny resource instantaneously.
But not the code itself. For some reason, computer code must go through a compilation, optimization and linking process, producing a monolithic executable that cannot be directly modified. This is even true of dynamic languages like JS/TS, which support modification at a fundamental level, yet somehow lose this ability when run through advanced toolchains.
Which is weird, since most compilers/OSes support this dynamism at a fundamental level - the unit of the C compiler's machine interface is a function, and the unit of replacement in most OSes is a dynamic library, a collection of such functions - yet changing these at runtime is almost unheard of and most of the time suicidal.
This is because of a couple of problems. Memory allocation: replacing parts of a program at runtime can lead to leaks if we don't clean up after it. Resource allocation: this can be solved by tying resource lifetimes to either outside factors, or to the lifetime of the function or its containing unit.
A demonstrated analog of this is OS processes, which can be terminated abruptly, their binaries replaced without fear of resource leakage.
The final problem of data corruption can be solved by making such program parts stateless, and making them use a store with atomic transactions.
I have a pretty good idea on how to build such an environment on the low level, whose core idea is having process-like isolation barriers isolating small pieces of programs, and an object database-like datastore that can never be corrupted due to transactional changes (which can be rolled back, enabling stuff like time-travel debugging). Said processes could communicate either via messages/events or sharing parts of their memory.
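The transactional-store part of that idea can be sketched in a few lines of Python (a toy illustration of commit/rollback semantics, not the actual design; all names here are invented):

```python
import copy

class TransactionalStore:
    """Toy object store: changes commit atomically or roll back entirely."""

    def __init__(self):
        self._data = {}
        self._history = []  # committed snapshots, enabling "time travel"

    def transaction(self):
        return _Txn(self)

class _Txn:
    def __init__(self, store):
        self.store = store

    def __enter__(self):
        # Work on a copy; the live data is untouched until commit.
        self.working = copy.deepcopy(self.store._data)
        return self.working

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            # Commit: snapshot the old state, then swap in the new one.
            self.store._history.append(self.store._data)
            self.store._data = self.working
        # On exception, simply discard the working copy (rollback).
        return False  # don't swallow exceptions
```

Because every commit keeps the previous snapshot, rolling back to any earlier state (the time-travel-debugging angle) is just a matter of restoring an entry from `_history`.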
Such a system would allow you to fearlessly change any part of the source code of a running application at runtime. Even if you mess up the code of a given component - say, even to the point that it doesn't compile - all that would happen is that that single component would cease to function, without affecting the rest of the app.
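As a rough Python analogy of that fail-safe swapping (purely illustrative; the real system would operate on compiled code, and `Component` is an invented name):

```python
class Component:
    """A tiny isolated 'process' whose code can be swapped at runtime.

    If the new source fails to compile, the component simply ceases to
    function -- the rest of the application is unaffected.
    """

    def __init__(self, source, entry="run"):
        self._fn = None
        self.reload(source, entry)

    def reload(self, source, entry="run"):
        try:
            ns = {}
            exec(compile(source, "<component>", "exec"), ns)
            self._fn = ns[entry]
            return True
        except Exception:
            # Bad source (syntax error, missing entry point, ...):
            # disable just this component instead of crashing the app.
            self._fn = None
            return False

    def __call__(self, *args):
        if self._fn is None:
            raise RuntimeError("component unavailable")
        return self._fn(*args)
```

A host application would catch the `RuntimeError` and keep running everything else while you fix the broken component.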
but that's not true: smalltalk, lisp, pike, erlang and some other languages allow you to change code at runtime, only requiring the recompilation of the changed unit of code (depending on the language. in pike it's at the class/object level)
> process-like isolation barriers isolating small pieces of programs, and an object database-like datastore that can never be corrupted due to transactional changes (which can be rolled back, enabling stuff like time-travel debugging).
doesn't smalltalk do pretty much that? i'd be really interested in learning how your idea differs. you may also want to look at societyserver/open-Team: https://news.ycombinator.com/item?id=42159045
it's a platform written in pike that implements an object storage, and allows code objects in that to be modified at runtime. transactions are at the object/class level. (if the class fails to compile, the objects are not replaced). it stores versions of classes so a rollback is possible, although not implemented in the interface. (means right now, if i want an older version i have to rollback manually)
> Such a system would allow you to fearlessly change any part of the source code of a running application at runtime - even if you mess up the code of a component, even to the point that it doesn't compile, all that would happen is that that single component would cease to function without affecting the rest of the app.
smalltalk does that, as does societyserver/open-Team, or the roxen webapplication server (also written in pike) and i am pretty sure some lisp and erlang systems do as well.
Tbh, I have never heard of Pike or societyserver before, will check those out!
As for smalltalk, I am also not intimately familiar with the language, but what I have in mind is somewhat lower level, with emphasis on C-like struct layouts stored in a POD way (so raw structs inside arrays and the like).
I'd say a key difference is that in my language (working name Dream - because I started the project as my 'dream' language, and picking names is hard), these isolation contexts are explicit, and pointers can't really cross them.
There are special 'far' pointers that do have the ability to reference external objects in a different context, but there's an explicit unwrap operation that needs to happen, and it can fail, as the object is not guaranteed to be reachable for whatever reason. Processes can be explicitly deleted, meaning all reference operations on them will fail.
To be clear, when i say process, i mean my lightweight internal isolation thing.
So in summary, my language is procedural inside processes, with in-process garbage collection, C-like performance and explicit method calls. Between processes, you either have Smalltalk-like signals, or you can do Rust-style borrows, where you can access objects inside the process for the duration of a method call.
It has an Erlang-like 'just let it crash' philosophy, but again it is a C-like procedural language (or shall I say Go-like, since it has total memory safety and GC).
It also has familiar C-like syntax, and quite a small(ish) feature set outside of the core stuff.
I have a huge doc written up on it, no idea if it would work and if it did, it would be useful, but I do have some tentative confidence in it.
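To make the far-pointer semantics concrete, here's a rough Python analogy (all names invented; the real design described above would be much lower level, with C-like struct layouts):

```python
class Process:
    """Toy stand-in for the lightweight internal isolation unit."""

    def __init__(self):
        self.alive = True
        self.objects = {}

    def delete(self):
        # Explicit deletion: every far reference into this process
        # will now fail to unwrap.
        self.alive = False
        self.objects.clear()

class FarRef:
    """Cross-process reference that must be explicitly unwrapped;
    the unwrap can fail if the target is gone."""

    def __init__(self, process, key):
        self.process = process
        self.key = key

    def unwrap(self):
        if not self.process.alive:
            raise LookupError("target process deleted")
        if self.key not in self.process.objects:
            raise LookupError("object unreachable")
        return self.process.objects[self.key]
```

The point of the explicit, fallible `unwrap` is that cross-context access is always visible in the code, so a deleted process can't silently corrupt its neighbours.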
> I have never heard of Pike or societyserver before
pike/roxen had a brief window of growth in the 90s but the leaders at the roxen company (not the devs) missed the opportunity to work with the FOSS community.
pike is fully C-syntax, and it is very performant, so that may be interesting for you.
societyserver is my fork/continuation of a university project called open-sTeam that stopped development more than a decade ago. i continue to use it and when i am not busy earning money try to work on it, but i haven't yet been able to build a community around it.
the process isolation you talk about sounds like something that erlang promises as well, but i don't know enough about erlang to tell. i'd be curious to learn more though.
open-sTeam/societyserver built an object-level access control system. method calls on other objects are intercepted and only allowed to pass if the caller has the necessary permission to access that object.
it's not process isolation, but also a concept i find interesting
Things like this exist. They're not that useful in practice, because it's like live editing PHP directly on the server, only more so. Most of the value of edits to a piece of code doesn't come from running it once, it comes from having that edit in a durable, managed place. And however much effort you put into your code management database, it's hard to beat the amount of tooling that exists for the worse-is-better "mostly-ascii files on a unix filesystem" model.
I mean, Tcl/Tk has had this since the 90s. Rewrite your procs (functions) on the fly, delete GUI items on the fly, generate new events, create listeners on the fly, etc, etc.
Quite easy to create a GUI that's interactive AND a console you can script on at the same time to inspect / edit / change code.
For example, don't like your window attributes? Write code to destroy it, and re-create it and keep your "live" data unchanged, and it will redisplay in the new style / layout.
And sure, you could code up atomic transactions quite easily.
Itcl even lets you create / add /remove classes or specific class instances on the fly, or redefine class methods.
I concur that TclTk is enormously versatile and productive. I've always regarded Tcl as a Lisp-like language. Tcl and Lisp share characteristics like homoiconicity that enable the benefits you describe.
Tcl isn't as widely known and used as it deserves to be. I think that's in part due to its syntax being sufficiently different from "mainstream" languages. The learning curve isn't particularly steep, but enough so that developers question whether it's worth the effort to go there.
FWIW Tcl 9.0 has recently been released. The language has been enriched with sophisticated object-oriented capabilities, coroutines, full math tower, etc. It's also rather easy to write extensions in C.
Anyway, the GUI toolkit (Tk) has been "borrowed" by many other languages (e.g., Python's tkinter), so quite a few programmers use TclTk, know it not.
Never claimed to be innovative, but sadly all these cool features are nowhere to be found in modern languages. And for some reason, they never appeared in a fast(ish) language, even though I'm sure the JVM is very well equipped to handle this kind of dynamism.
Recompiling a method[1], popping the stack frame, and re-entering the new method is a very, very common debugging pattern on the JVM. I miss it every day that I'm on vastly dumber platforms
1: pedantically, you're recompiling the whole class, but usually it's only one method changing at a time unless things are really going bananas
DCEVM (RIP) allowed swapping the method signature, too, but that is a lot more tricky to use effectively during debugging (e.g. popping the stack frame doesn't magically change the callsite so if you added extra params it's not going to end well) e.g. https://github.com/TravaOpenJDK/trava-jdk-11-dcevm#trava-jdk...
For anyone interested in this: Tsoding (Twitch and YouTube streamer of "recreational programming") demonstrates this in one of his projects, where he hot reloads a dynamic library without interrupting the main program, to test different functionality.
Here's an operating system kernel in Rust that can hot load/unload modules at ELF object boundaries, made safe by a trusted compiler allowing only safe Rust:
This is cool and all, but the problem with doing this in C is that if you accidentally introduce a memory corruption bug while you're just messing about with the code, you're forced to restart.
Not a problem in a toy app, but in something like a huge program, it can be a PITA to reload everything and get back to where you were.
Yeah, C really does not support this as a language (or implementation) feature - it's something that can be hacked in, with a lot of difficulty, inconvenience, and loss of safety guarantees.
I love napari. I remember downloading it on a whim, and while poking around, I accidentally opened its built-in python console. Half the time, if I'm writing a plugin for it, I open up the console just so that I can play around and print out stuff and try new things.
Everything, even the viewer itself, is accessible from the repl. Nothing hides behind a black box.
I wish that the first semester of programming class deliberately left code out of the material. IMHO students should start with something like this short list:
FileMaker/Microsoft Access/HyperCard (no longer exists)
Macromedia Flash (no longer exists)
Spreadsheets (like Microsoft Excel, unfortunately Airtable isn't there yet?)
Wix (maybe? surely there are better alternatives)
Zapier (or an open source version)
Then move on to what programming could/should be:
htmx
Firebase/RethinkDB (no longer maintained?)
Erlang/Go
GNU Octave/MATLAB
Lisp/Scheme/PostScript/Clojure
Only then, after having full exposure to what computers are capable of and how fast they really are, should students begin studying the antipatterns that have come to dominate tech:
React
Ruby on Rails
Javascript (warts of the modern version with classes and async/await, not the original)
C#/Java/C++/Rust (the dangers of references/pointers and imperative programming)
iOS/Android (Swift vs Objective-C, Kotlin vs Java, ill-conceived APIs, etc)
I realize this last list is contentious, but I could go into the downsides of each paradigm at length. I'm choosing not to.
Since we can't fix the market domination of multibillion companies who don't care about this stuff on any reasonable timescale, maybe we can pull the wool off the children's eyes and give them the tools to tear down the status quo.
I suspect that AI and geopolitical forces may take this decision away from us though. It may already be too late. In that case, we could start with spiritual teachings around philosophy, metaphysics and wisdom to give them the tools needed to work with nonobjective and nondeterministic tech that's indistinguishable from magic.
This kind of list may be right for a trade school. If that's what you're referring to, then I don't disagree. Those students want to learn how to use those tools.
But if the class is computer science at a university, then the students want to go deeper and learn how to improve upon and compete with the existing tools. They need the theory first, which means Lisp (or a derivative) and an imperative language.
In my freshman year of college, I thought I was hot stuff because I knew C++, so I tried to place out of some of the 100 level classes. But the test seemed strange to me, focusing more on abstractions than syntax. I don't remember if I failed, but I don't think I placed out of anything. One of my first classes was on Lisp, specifically Scheme, and it completely blew my mind and forever changed how I look at programming.
Just before I graduated in 1999, they started transitioning to Java, because the web was so popular. But most of us thought that was a mistake. I don't know if they ever switched back to Lisp.
On a funny note, I took that whole class without realizing that Lisp statements could be broken up into separate lines. Or more accurately that each line just declares equivalences that get reduced down to their simplest form by the runtime. So I wrote all of the homework assignments as one giant function of nested parentheses, even for some of the more complex tasks on sorting primitives like lists and trees. I picture the graders shaking their heads in a mix of frustration and awe hahaha.
Rather than raw PostScript, I would suggest METAPOST --- it's a lot more approachable, and with mplib as part of luatex, far more approachable (no need for Ghostscript or distilling to PDF).
I agree with all of the assertions about what software should be.
But... I think a lot of it already is customizable, and users don't want to configure. End-users (or doctors) hate having to learn more about software than they absolutely must. Just an example, Epic (EHR from the essay) definitely has the ability to mark fields as optional/required. Someone just needs to get in and do it, and they don't want to/know how.
The inaccessibility of config to laypeople may actually be where AI shines. You prompt an in-app modal to change X to Y, and it applies the change. A natural language interface to malleability.
This. Making something super customizable is a lot harder to implement (the code becomes too generic, hard to reason about and debug) and often presents a worse UX ("why are there so many options??"). Having the UX design team interview and consider the needs of each user role interacting with the application, and ensuring the app displays/asks only the appropriate info for each role, hiding the rest and adopting smart defaults (instead of requiring everything), is easier to implement, safer, and in many cases produces more intuitive interfaces than highly customizable ones.
Yes, nowadays all the main browsers are pretty much locked down and you have to use the official app stores to sign and distribute your extensions, even if it's just something for your own use. I really wish this were more open, since extensions allow for so many cool use cases because they don't have all the same restrictions that regular webpages have (CORS, for example).
I remember having this idea in undergrad in 2011. My big wish was that every app would ship with a scripting language or an API. The problem is that it’s not at all straightforward to do this. The more complex an app, the more important a facade (like a front end) becomes.
Apple did this back in the mid 1990s (before OS X) with AppleScript. Every application was supposed to ship with metadata that described its object model, along with methods that could be invoked on it. AppleScript was sort of a protocol or interface standard that allowed scripts to automate application actions (and more) without having to use GUI macros. Scripts could be written with a variety of syntaxes. It was pretty cool. However, it turned out that providing an object model and API surface was a pretty heavy lift for application developers, and most just half-assed it. And while a fairly robust community developed around AppleScript, it was too small to generate any noticeable uplift in sales for either Apple or independent software vendors. Thus not really commercially viable.
And Microsoft has had OLE -- which is sort of analogous to the object-model portion of AppleScript -- for ages.
My ideas about how to fix this (with a new operating system design) involves:
1. Use of FOSS will be helpful, since it can be improved if something is wrong with it.
2. UI controls are objects with data models like any others are, so even if a API is not provided by a program, these UI controls, and the data associated with them, can be added into scripts like any other API can be.
3. Capabilities are needed for I/O, and proxy capabilities can be created and used. Even if the program does not expect the I/O to be filtered or modified, the system ensures it can be done anyway (and the command shell in the system is designed to allow this, too).
4. This metadata is required even for a program to start (due to the way the I/O is working).
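Point 3, the proxy-capability pattern, can be illustrated in a few lines of Python (a toy sketch of the general idea; the class names are made up):

```python
class WriteCapability:
    """A capability: the only handle through which output can happen."""

    def __init__(self, sink):
        self._sink = sink

    def write(self, data):
        self._sink.append(data)

class ProxyCapability(WriteCapability):
    """Wraps another capability and filters/modifies the traffic.
    The program holding it can't tell it isn't the real thing."""

    def __init__(self, inner, transform):
        self._inner = inner
        self._transform = transform

    def write(self, data):
        self._inner.write(self._transform(data))
```

Because the program only ever receives a capability object and never the underlying resource, the system can interpose a proxy whether or not the program anticipated it.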
The fundamental problem with customizability is that code path complexity is exponential in branching. So it works as long as your app is sufficiently simple, but eventually the exponents catch up and eat your software alive.
I think that if the entire computer and operating system are designed better, and the software is designed better, then there are things to be done which would improve the customizability and other things. (I mentioned some of my ideas in some other comments.)
FOSS also helps, though being FOSS does not by itself fix things (as is mentioned in the article); still, it is one of the things to be done, too.
UNIX programs with pipes also help, though not perfectly; nevertheless, writing programs this way when working with UNIX systems is helpful to do. (For working with picture files, I almost entirely use programs I wrote myself which use farbfeld, combined with pipes; I then convert to PNG or other formats when writing to disk. I do not use farbfeld to store pictures on disk, only as the intermediate format for pipes.)
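For the curious, farbfeld's format is simple enough that a pipe-composable filter fits in a few lines. A sketch in Python (my own toy example, not one of the parent's actual tools):

```python
import struct

def invert_farbfeld(data: bytes) -> bytes:
    """Invert the RGB channels of a farbfeld image (alpha untouched).

    farbfeld layout: 8-byte magic "farbfeld", 32-bit big-endian width
    and height, then one 16-bit big-endian RGBA quad per pixel.
    """
    assert data[:8] == b"farbfeld"
    w, h = struct.unpack(">II", data[8:16])
    out = bytearray(data[:16])  # keep the header as-is
    for i in range(16, 16 + w * h * 8, 8):
        r, g, b, a = struct.unpack(">4H", data[i:i + 8])
        out += struct.pack(">4H", 0xFFFF - r, 0xFFFF - g, 0xFFFF - b, a)
    return bytes(out)
```

Used as a filter, it would just be `sys.stdout.buffer.write(invert_farbfeld(sys.stdin.buffer.read()))`, which is exactly what makes farbfeld tools so easy to chain with pipes.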
Not directly related, but I'm sad when I fire up the old Windows phone that I bought for nostalgia and nothing works other than the base OS and maybe the old Bing.
I get it too, the world moved on, people have to manage APIs, updates... but yeah.
There should be a law forcing them to open all specs, so that enthusiasts would be able to keep using the device. It should be criminal to force planned obsolescence and create e-waste like this.
I do think that they are right about many things, although I have my own ideas about how to improve them (and I do not agree with all of the ways they (Ink & Switch) are doing with it).
In UNIX systems you can use pipes between programs (if the programs support that; many modern programs don't support it very well), although there are still problems with that too. (I also disagree with the idea that text (especially Unicode text, although the objections apply even without a specific character set) would be the universal format.)
My idea of a computer design and operating system design is intended to do things which will avoid the problems mentioned there (although this does not avoid needing actually good programming, and such things as FOSS etc still have benefits), as well as having other benefits.
Some of the features of my design are: CAQL (Command, Automation, and Query Language), UTLV (Universal Type/Length/Value), and proxy capabilities. (There are more (e.g. multiple locking and transactions), but these will be relevant for this discussion.)
Like OpenDoc and OLE, you can include other kinds of things inside any UTLV file, by use of the UTLV "Extension" type. The contents of the extension would usually be UTLV as well, allowing the parts to be manipulated like the others, but even if the contents aren't UTLV (e.g. for raster images), there would be functions to convert and deal with them, so it will still work.
With those things in combination with the accessibility (one of the principles is that accessibility features are for everyone, not only for the people with disabilities; among other things this means that it does not use a separate "accessibility" menu) and m17n and other features, you can also do such things as affect colours, fonts, etc, without much difficulty. (They might not seem related at first, but they are related.)
I had also recently seen https://malleable.systems/mission/ which seems to be related (you might want to read this document even if you are not interested in my own comments). One part says, "If I want to grab a UI control from one application, some processing logic from another, and run it all against a data source from somewhere else again, it should be possible to do so.", and with CAQL and UTLV and proxy capabilities, this can be done easily, because the UI controls are callable objects (which can be used with CAQL) like any other one, the data source can use UTLV (which can be queried and altered by CAQL), and the interaction between them can use proxy capabilities.
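UTLV is this commenter's own design, but the generic type/length/value encoding it builds on is easy to sketch; here's a minimal Python version (the 16-bit type and 32-bit length field widths are arbitrary choices for illustration, not the UTLV spec):

```python
import struct

def tlv_encode(type_id: int, value: bytes) -> bytes:
    # One record: 16-bit type, 32-bit length, then the value bytes.
    return struct.pack(">HI", type_id, len(value)) + value

def tlv_decode(data: bytes):
    # Yield (type, value) records; nesting falls out naturally,
    # since a value may itself parse as a sequence of TLV records.
    i = 0
    while i < len(data):
        t, n = struct.unpack(">HI", data[i:i + 6])
        yield t, data[i + 6:i + 6 + n]
        i += 6 + n
```

The nesting property is what makes an "Extension"-style type workable: a tool that doesn't understand a record's type can still skip it by its length, while one that does can recurse into it.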
The challenges here feel insurmountable, but I can't help feeling there's a certain inevitability to the de-monolithization, the de-totalization, of a domain of computing that is so wholly ensconced inside applications, inside experiences purely pre-defined by a given app.
Already with AI we are seeing a huge uptick in people's expectations that agents operate across apps. The app is losing its monopoly of power, losing its primacy as the thing the user touches. See "How Alexa Dropped the Ball on Being the Top Conversational System on the Planet", an article about a lot of factors, many of them Conway's Law & corporate-fiefdom oriented, but which touches repeatedly on the need to thread experiences across applications, across domains, where the historical "there's an app for that" paradigm is giving way, is insufficient. https://www.mihaileric.com/posts/how-alexa-dropped-the-ball-... https://news.ycombinator.com/item?id=40659281
AI again is an interesting change agent in other ways. As well as scripting & MCP'ing existing tools/apps, the ability to rapidly craft experiences is changing so quickly. "Home-Cooked Software and Barefoot Developers" speaks directly to how this could enable vastly more people to craft their own experiences. I expect that over time frameworks/libraries themselves will adapt, that the matter of computing shifts from targeting expert developer communities who use extensive community knowledge to do their craft, to forms that are deliberately de-esotericized, crafted in even more explicitly compositional manners that are more directly malleable, because AI will be better at building systems with more overt declarative pieces. https://maggieappleton.com/home-cooked-software/ https://news.ycombinator.com/item?id=40633029
Right now the change is symbolic more than practical, but I also loved seeing Apple's new Liquid Glass design system yesterday, in part because it so clearly advances what Material set out to do: construct software of multiple different layers, with the content itself being the primary app surface. And in Liquid Glass's case it extends that surface even further, making it practically full screen always, with tools and UI merely refractive layers above the content. This de-emphasizes the compute and makes the content the main thing, by removing the boxes and frames of encirclement that once defined the app's space, giving way to pure content, with the buttons mere layers floating above, portals of function above the content below. In practice it's not substantially different from what came before, yet, but it feels like the tools are more incidental, a happenstance layer of options above the content, and it's suggestive to me that the tools could change or swap. https://www.apple.com/newsroom/2025/06/apple-introduces-a-de... https://news.ycombinator.com/item?id=44226612
There's such a long arch here. And there's so many reasons why companies love and enjoy having total power over their domain, why they want to be the sole arbiter of experience, with no one else having any say. We've seen collapses of interesting bold intertwingular era API-hype hopeful projects, like Spotify desktop shutting down the amazing incredible JavaScript Apps SDK so long ago (2011-2014). https://techcrunch.com/2014/11/13/rip-spotify-apps-rip-sound...
Folks love to say that this is what the market wants, that there is convenience and freedom in not having any choices, in not having to compose tools, in everything being provided whole and unchanging. I'd love to test that thesis, but I don't think we have evidence now: 99.999%+ of software is built in the totalistic form, tablets carved and passed down to mankind for us to use as directed (or risk anti-circumvention felony charges!). We haven't really been running the experiments to see what would be good for the world, what would make us a better, happier, more successful world. Who's going to foot the bill, who's going to abandon control over their users?
And it's not something you can do alone. The real malleable software revolution requires more than individual changes, more than individual apps adding plugins or scripting. The real shift comes when the whole experience is built to be malleable: general systems research toward operating systems that host not just applications but views, tools, data flow, history, event sourcing, and (perhaps) transactions. No one piece of software can ever adequately be malleable software on its own; real malleable software requires malleable paradigms of computing, upon which experiences, objects, and tools compose.
It all sounds so far off and far-fetched. But where we are now is a computing trap, one premised on a philosophy of singularness and unconnectedness delivered down to us users/consumers (a power relationship few want to change!). The limitations of the desktop application model, as it has been carried over and morphed into mobile apps and watch apps, feel like an ever more cumbersome limit, a gate on what is possible. I feel the tension strongly: I'm with the pessimists who say the malleable software world is impossible, that we can never make the shift; I cannot see how it could ever come to be. And yet I don't think we can stay here forever. The limitations are too great, and the opportunity for a better, more open computing to awaken is too interesting and too powerful to lie slumbering forever. I want to believe the future is exciting, in good ways, in re-opening ways, and although I can hardly see who would fund it or why, and although the challenge is enormous, the project of rebuilding mankind's agency within the technological society feels necessary and inevitable, and my soul soars at the prospect. Malleable software: thus we all voyage towards computing.
I agree, I feel like the authors are underestimating the effect the new AI is already having on the concept of local software crafting. For my entire lifetime, I've had friends ask me to help them build software that accesses some data somewhere, and I've always had to turn them down because there are too many unknowns.
I've spent countless hours thinking about how to build a business that would solve some class of problems my friends have encountered and I've almost always had to conclude that the business would probably not be profitable, so their ideas were never tested.
Now, with a 2025 chatbot, I can confidently estimate the feasibility of a basic project in minutes and we can build the thing together in hours. No one needs to make a profit, build a new business, or commit to ongoing maintenance. Locally crafted software is taking off dramatically and I think it will become the new normal.
> I agree, I feel like the authors are underestimating the effect the new AI is already having on the concept of local software crafting
Coauthor here -- did you catch our section on AI? [1]
We emphatically agree with you that AI is already enabling new kinds of local software crafting. That's one reason we are excited about doing this work now!
At the same time, AI code generation doesn't solve the structural problems -- our whole software world was built assuming people can't code! We think things will really take off once we reorient the OS around personal tools, not prefabricated apps. That's what the rest of the essay is about.
Yes, but I think we have a somewhat different idea about the market forces. My impression from your essay is that you believe app developers will add APIs that enable personal tools, and only then will local software crafting take off.
My belief is that it is happening already: local software crafting is happening now, before the tools are ready. People aren't going to wait for good APIs to exist; people will MacGyver things together. They'll scrape screens (sometimes with OCR), run emulated devices in the cloud, and call APIs incorrectly and abusively until they get what they need. They won't ask for permission.
A lot of software developers may transition from building to cleaning up knots.
A tool that looks at this sort of thing, and which was mentioned here recently:
https://news.ycombinator.com/item?id=44118159
but which didn't seem to get much traction, is:
https://pontus.granstrom.me/scrappy/
but it pretty much only works for JavaScript programmers and their friends (or folks interested in learning JavaScript).
Other tools which I'd like to put forward as meriting discussion in this context include:
- LyX --- making new layout files allows a user to create a customized tool for pretty much any sort of document they might wish to work on --- a front-end for LaTeX
- pyspread --- every cell being either a Python program or the output of a program, and the possibility of cells being an image allows one to do pretty much anything without the overhead of making or reading a file
- Ipe https://ipe.otfried.org/ --- an extensible drawing program, though it really needs a simpler extension mechanism, and I'd love to see a tool in the vector drawing space which addressed that --- perhaps the nascent https://graphite.rs/ ?
Very pleased to see LyX and Ipe here. They've been invaluable throughout my academic career, and are just a pleasure to use (once you get the hang of them).
The Qt/KDE world has (imho) some of the best quality software I've used, and is, astonishingly, relatively unpopular compared to FOSS competitors.
Ipe now has a web interface (through the magic of Qt) and I remember there was a plan to make one for LyX, though if it ever happened, I couldn't find it.
Neat -- Scrappy looks like a lovely prototype! As the creators say in their writeup, it fits nicely into the lineage of HyperCard-style “media with optional scripting” editors, which provide a gentle slope into programming.
In the section on dynamic documents towards the end of our essay, we show several of our lab’s own takes on this category of tool, including an example of integrating AI as an optional layer over a live programmable document.
Yeah, I just wish it had a Bézier curve object....
I need an interactive tool for programming such curves (or I need to buckle down and implement the METAFONT algorithm in my current project).
Tove2d editor demo?
Interesting, trying to research and look into that now --- good starting link and suggestion for approach?
First two links right now both go to Scrappy, but your text makes it sound like you are contrasting different things?
I meant to note that Scrappy was discussed at the first link and then to provide the actual link --- my apologies if that wasn't clear.
Viberunner is another stab at this idea: https://news.ycombinator.com/item?id=44236729
> Mass-produced software is too rigid
Yes, absolutely, even trivial things like colors can rarely be changed, let alone more involved UI parts.
> Inflexible electronic medical records systems are driving doctors to burnout. > When different users have different needs, a centralized development team can’t possibly address everyone’s problems.
That's not the main issue, which is that they don't address *anyone's* problems well, since actual users have very little power here, and the devs are far removed from the actual user experience. Like that example of filling in useless fields - that serves no one!
> when a developer does try to cram too many solutions into a single product, the result is a bloated mess.
Unless it's organized well? There is no inherent reason that many solutions equal a mess, or even bloat (e.g., if solutions are modules you can ignore or not even install, then your app, with only the solutions you care about, has no bloat).
But in general, these are very laudable goals; it would be very empowering for many users to live in a dream world where software is built on such principles...
I appreciate the idea behind the post, because certainly, we need more hackable apps now that everything is becoming a SaaS that effectively cannot be archived or hacked on (unlike, say, WinAmp or major releases of Windows and their respective fan updates, or, for a more common example, game mods).
Unfortunately I think that while there’s a decent number of power users and people who have the aptitude to become power users who will make use of software made to be deeply customizable, they are outstripped many times over by people who don’t see software that way and have no interest in learning about it. People are quick to point fingers about why the situation is as it is, but the truth is that it was always going to be this way once computers became widely adopted. It’s no different from how most people who drive cars can’t work on them and why few feel comfortable making modifications to their houses/apartments. There’s just a hard limit to the scope and depth of the average individual's attention, and more often than not technical specialization doesn’t make the cut. No amount of gentle ramping will work around this.
That doesn’t mean we shouldn’t build flexible software… by all means, please do, but I wouldn’t expect it to unseat the Microsofts and Googles of the world any time soon. I do however think that technically capable people should do anything they can to further the development of not just flexible, but local-first, hackable software. Anything that’s hard-tethered to a server should be out of the running entirely and something you can keep running on your machine regardless of the fate of its developer should take priority over more ephemeral options.
Pretty much everyone makes modifications to their homes—arranging furniture, choosing decorations, storing tools and implements and books and...
I've been to hotel rooms that looked identical to each other. I've never been to anybody's long-term home that wasn't unique—and unique in obvious, personalized ways. Even the most regularized housing ends up unique: I've visited everything from US dorm rooms to ex-Soviet housing blocks to cookie-cutter HOA-infested suburbs and yet, rules and norms aside, folks' private spaces were always unique, adapted through both conscious action and by unconscious day-to-day habits.
Just because 90% of these modifications did not need more DIY tools than the occasional hammer and nail does not mean they don't "count". That just shows that reducing friction, risk and skill requirements matters.
Gentle ramping helps in two ways. For people who would be inclined to get into more "advanced" modifications, it lowers the activation energy needed and makes it easier to learn the necessary skills. But even for people who would not be inclined to go "all the way", it still helps them make more involved modifications than they would otherwise. A system with natural affordances to adaptation lets people make the changes they want with less thought and attention than they would otherwise need—the design of the system itself takes on some of the cognitive load for them.
With physical objects like home furniture, the affordances stem from the physical nature of the item and the environment. With software, the affordances—or lack thereof—stem entirely from the software's design.
Mainstream software systems are clearly not designed to be adaptable, but we should not take this as a signal about human nature. Large, quasi-monopolistic companies are driven by scalability, legibility and control far more than user empowerment or adaptability. And most people get stuck with these systems less because they prefer the design and more because there are structural and legal obstacles to switching. The obstacles are surmountable—you can absolutely use a customized Linux desktop day-to-day, I do!—but they add real friction. And, as we repeatedly see through both research and observation, friction makes a big difference to most people. Friction has an outsize impact not because of people's immutable preferences but, as you said, because people have finite pools of time and attention with too many demands to do everything.
I am making an offline-first thing that serializes to a single file, which can be opened without a local file server.
Still working on the UX a little, but it seems close to what you want (and I agree). The vision statement is about creating immortal software, exactly to fight bitrot: https://github.com/tomlarkworthy/lopecode
Well, with cars I think many would appreciate it if they too were more malleable. My Dad has often told me of a car he once had that was really easy to repair (edit: it was a VW Beetle), which is notable since he was never known as someone who was terribly handy. Doubtful anyone would have that experience with today's cars.
> It’s no different from how most people who drive cars can’t work on them
What if, instead of cars and driving, we use reading and writing as the metaphor for the kind of media/utility computing can have? I'd argue it then changes the whole nature of the argument.
Yeah, I'm gonna have to disagree with that. Computer users are shaped by the tools that are available to them. There will always be varying degrees of how much customization a user takes on, but when the environment is designed around customization, users end up using the tools that are given to them.
I’m not disagreeing with you, but rather suggesting that the ceiling for how much the average user can/will leverage customization is surprisingly low.
If we’re looking for levers to pull to help more people become advanced computer users, I believe progressive disclosure combined with design that takes advantage of natural human inclinations (association, spatial memory, etc) are much more powerful. Some of the most effective power users I’ve come across weren’t “tech people” but instead those who’d used an iMac for 5-10 years doing photography or audio editing or whatever and had picked up all of the little productivity boosters scattered around the system, ready for the user to discover at just the right time.
With that in mind, I think the biggest contributor to reduced computer literacy is actually the direction software design has taken in the past 10-15 years, where proper UI designers have been replaced with anybody who can cobble a mockup together in Photoshop, resulting in vast amounts of research being thrown out in favor of Dribbble trends and vibes. The result is UI that isn’t humanist, doesn’t care to help the user grow, and is made only with looking pretty in slideshows and marketing copy in mind.
> I’m not disagreeing with you, but rather suggesting that the ceiling for how much the average user can/will leverage customization is surprisingly low.
The average person is also a crappy writer, bad musician and lousy carpenter. But a notepad and a pen don’t tell me how to use them. They don’t limit my creative capacity. Same story with a piano, or a hammer and chisel. I wish computers were more like that.
Your point stands. Most notebook users never use it to write a bestselling novel, or draw like Picasso. But the invitation to try is still in the medium somehow. Just waiting for the right hand.
I agree with the rest of your comment. As software engineers, we could build any software we want for ourselves. It’s telling that we choose to use tools like git and IntelliJ. Stuff that takes months or years to master. I think it’s weirdly perverted to imagine the best software for everyone else is maximally dumbed down. That’s not what users want.
Rather than aiming for “software that is easy to use” I think we should be aiming for “software that rewards you for learning”. At least, in creative work. I’m personally far more interested in making the software equivalent of piano than I am in making the software equivalent of a television set.
Give me Delphi. No, seriously, give me Delphi, but for the web and in a modern popular programming language. Python would be great, but I will not turn my nose away from TypeScript or Go or Lua.
For those who don’t know, Delphi was (is?) a visual constructor for Windows apps that you outfitted with a dialect of Pascal. It was effing magic!
Nowadays the web ecosystem is so fast-paced and so fragmented, the choice is paralyzing, confidence is low. The amount of scaffolding I have to do is insane. There are tools, yes, cookie cutters, npx’s, CRAs, copilots and Cursors that will confidently spew tons of code but quickly leave you alone with this mess.
I haven’t found a solution yet.
Does the recently updated (and aptly named) Lazarus not suit?
https://news.ycombinator.com/item?id=43913414
A quick search yielded:
https://wiki.freepascal.org/Developing_Web_Apps_with_Pascal
and
https://www.reddit.com/r/pascal/comments/es8wlh/free_pascal_...
I've been playing around with the Godot game engine as a sort of modern successor to Delphi / Lazarus. I'm currently messing around with trying to create some database server management software using it.
GDScript is pretty similar in feel to Python, and you can also use C# if you want to. It has some level of GUI controls in the framework (not sure how many yet, but all of the GUI controls used to build the editor are available for use).
I want to believe the 3d capabilities might be useful for some kind of UI stuff, but I don't really have a real idea how to make that work - just a "wouldn't it be neat if..." question about it right now.
I am currently using a stack which consists of:
- Makerkit Next.js/Supabase Starter Kit
- Python backend processing
- BMAD framework for building specifications
- Claude Code with Max subscription
- Cursor for in-IDE adjustments
I have managed to make some pretty incredible tools, it definitely feels like magic.
I would say I split my time 70% using BMAD as an assistant to build out my scope and clarify what I am trying to do in my own head, then 30% supervising Claude Code.
I have also managed to build simpler tools using Streamlit, to great effect.
Your best bet might be using Blazor with a RAD development tool, though I haven't tried it. Blazor notably has DevExpress components, which is what makes Delphi tolerable.
https://anvil.works/ uses Python
I've reverse-engineered a couple of programs before in order to get them to do what I want. Things like setting default options in a program that doesn't let you change the defaults, or getting older Windows programs to function correctly.
I've also patched open-source programs locally in order to get them to do what I want but wouldn't be suitable for upstreaming. For example, I've reverted the order of buttons in a "do you want to save?" close dialog when they changed in an update.
Minor stuff, but just being able to do this is amazing. The trouble is, developers - at least those of closed-source programs - don't want you to be able to do that, partially due to a lot of them relying on security by obscurity in order to earn money.
As such, it feels like the only way you're going to get developers to be on board with something like this is to be able to have them specify what people can change and what people can't change - and that's something that developers already do (whether they realise it or not) with things like INI files and the Registry.
This is why people using UNIX-based systems campaign for small programs that do one thing and do it well. Being able to combine these small programs into a pipeline that does exactly what you want? Now that's amazing.
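As a toy sketch of the pipeline composition described above (assuming only that `sort` and `uniq` are available on the PATH, as on any Unix-like system), the same small-programs-glued-together idea can be expressed programmatically:

```python
import subprocess

def pipeline(data: str, *commands) -> str:
    """Feed `data` through a chain of external commands,
    connecting each one's stdout to the next one's stdin."""
    out = data.encode()
    for cmd in commands:
        proc = subprocess.run(cmd, input=out, capture_output=True, check=True)
        out = proc.stdout
    return out.decode()

# Equivalent of: printf '...' | sort | uniq -c
words = "pear\napple\npear\nbanana\napple\npear\n"
result = pipeline(words, ["sort"], ["uniq", "-c"])
print(result)
```

Each stage knows nothing about the others; the glue code only moves bytes between them, which is exactly what keeps the composition cheap.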
I have also patched open-source programs locally in order to get them to do what I want, although sometimes the program is very large and takes a long time to compile, while other times the program is much smaller and can compile more quickly. I had also sometimes modified binaries, or had entirely rewritten a program to get it to do what I wanted. Sometimes, I was able to do it by editing configuration files manually rather than using the UI, or by changing the permissions of files.
Do you think it's possible to make GUI software with a Unix philosophy? Specifically piping together small programs seems natural in a shell but I've struggled to figure out how it could work for GUI apps.
Copy ComfyUI's paradigm of connected components.
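For readers unfamiliar with it, ComfyUI represents a program as a visual graph of nodes whose output ports feed other nodes' input ports. A minimal, hypothetical sketch of that dataflow idea (the `Node` class here is illustrative, not ComfyUI's actual API):

```python
class Node:
    """A dataflow node: a function plus the upstream nodes feeding it."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def evaluate(self):
        # Pull values from upstream nodes, then apply this node's function.
        args = [node.evaluate() for node in self.inputs]
        return self.fn(*args)

# Wire up a tiny graph the way a visual editor would connect ports:
# a source node feeding a sort node feeding a "take first" node.
source = Node(lambda: [3, 1, 2])
sort = Node(sorted, source)
head = Node(lambda xs: xs[0], sort)
print(head.evaluate())  # → 1
```

A GUI built this way swaps the lambdas for user-visible blocks, but the composition model is the same: the graph, not any single app, is the program.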
I hope you're looking carefully at COM/OLE Automation, which achieved pretty much all of those things.
- In-process and cross-process language-agnostic API bindings.
- Elaborate support for marshalling objects and complex datastructures across process boundaries.
- Standardized ways to declare application and document object models that can be used by external applications, or internally by application add-ons.
- A standardized distribution system for application extensions, including opportunities for monetization.
- Standardized tools for binding application APIs to web services and database services.
- A default scripting engine (VBA) that can be embedded into applications.
- Admittedly primitive and mostly ill-advised support for dynamically typed objects, and composable objects.
And it provides opportunities for all of the levels of application customization you seem to be looking for.
- Trivial tiny customizations using in-app VBA.
- The ability to extend your application's behavior using addons downloadable from a marketplace WITHOUT trying to capture a percentage of licensing revenue from those who want to monetize their add-ons.
- The ability to write scripts that move data between the published document object models of various applications (and a variety of standard data formats).
- The ability to write fully custom code that lives within applications and interacts with the UI and with live documents within those application (i.e. write-your-own add-ons).
Plus it would be enormously fun to build the equivalent functionality of COM/OLE with the all the benefits of hindsight, and none of the cruft incurred by Visual Basic, with lessons in hand from some of the things COM didn't do well. (svg as a graphics transport, perhaps? A more organized arrangement of threading model options? Support for asynchronous methods? A standardized event mechanism?)
Questions that come to mind:
- What can you get away with not doing that COM does do? Not much, I think.
- How could you make it better? A bunch of ways!
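For context on why COM enables all this: its Automation model is built on late binding, where a caller resolves a member by name at call time (via IDispatch::GetIDsOfNames and Invoke), which is what lets VBA drive any application without compile-time headers. A rough pure-Python sketch of that one idea (the `DispatchProxy` and `Document` classes are illustrative, not a real COM binding):

```python
class DispatchProxy:
    """Late-bound proxy in the spirit of COM's IDispatch:
    members are resolved by string lookup at call time."""
    def __init__(self, target):
        self._target = target

    def invoke(self, name, *args):
        member = getattr(self._target, name, None)
        if member is None:
            raise AttributeError(f"object exposes no member named {name!r}")
        # Methods get called; plain properties are returned as-is.
        return member(*args) if callable(member) else member

# A host application object exposing a small "object model".
class Document:
    def __init__(self):
        self.text = ""
    def append(self, s):
        self.text += s
    def word_count(self):
        return len(self.text.split())

doc = DispatchProxy(Document())
doc.invoke("append", "hello malleable world")
print(doc.invoke("word_count"))  # → 3
```

Because the caller only needs strings, any scripting language (or, these days, an LLM) can drive any object that publishes such a model, which is the property the parent comment is pointing at.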
I used to own COM at Microsoft; I think that MCP is the current re-instantiation of the ideas from COM and that English is now the new scripting language.
There are people who like to tinker, to play with things, take them apart, learn how they work, put them back together again. Some of them go on to make new things. A few of them will make things that change the world. I'd like to live in a world that does more to encourage imagination and creativity, that lets people participate in creating their future. Software doesn't all have to be black boxes with No User Serviceable Parts Inside. We've seen what people can do with things like HyperCard, Visual Basic, Excel. And have fun doing it.
As described in the article, sharing data between apps is currently impossible. I wish Solid/PODs had taken off, but I get the sense that project spent more time on ontologies and less on making useful things.
How can we draw apps into using a common data backend owned by the user?
Almost all apps I use can export their data into common and open formats. If you make your workflow around common file formats instead of specific apps, then you can easier share data between apps.
> we created Patchwork—a web-based collaboration environment for malleable software... storing both user data and software code in Automerge documents. On top of that, it adds version control utilities like history views and simple branching. These tools apply to any document in the system—whether a piece of writing, or code for a software tool... Eventually we also plan to release Patchwork as an open-source tool
What milestones would you like to hit before open-sourcing it? As an outsider, it looks like it has a LOT of features, and I wonder if there's feature creep. Still, version control for everything is a tall order, so perhaps it needs plenty of time to bake.
Actually, Patchwork has surprisingly few features! Think of it more like an OS than a product. The goal is a small set of composable primitives that let you build many things - documents, tools, branching/diffs, plugins…
To answer your question: although we use Patchwork every day, it’s currently very rough around the edges. The SDK for building stuff needs refinement (and SDKs are hard to change later…) Reliability and performance need improvement, in coordination with work on Automerge. We also plan to have more alpha users outside our lab before a broader release, to work through some of these issues.
In short, we feel that it’s promising and headed in a good direction, but it’s not there yet.
In the 'Tools, not Apps' part of the article they reference Michel Beaudouin-Lafon's talk 'A World Without Apps' which goes back in time and shows the Xerox Star operating system: https://m.youtube.com/watch?v=ntaudUum06E&t=313s
Another reference I usually bring up is Alan Kay's talk on smalltalk: https://www.youtube.com/watch?v=AnrlSqtpOkw&t=4m19s
My related comments on this, just to show other stories along this theme:
- https://news.ycombinator.com/item?id=36885940
- https://news.ycombinator.com/item?id=36594543
- https://news.ycombinator.com/item?id=18254214
Though clearly imperfect, video games have, at times, managed to reflect some of these principles. There was a brief moment at the peak popularity of World of Warcraft when the game was highly moddable -- if you walked up to a stranger's monitor you would not recognize the interface they were using for the game, because they'd customized it so much.
The gaming audience is probably the most demanding of any regarding customization, modding, accessibility, and other similar principles -- when the market forces line up and developers are flush enough to offer more malleability, video games frequently do.
Ink & Switch is doing great (some would say overdue) research that’s on the boundary of commercializability but outside the bounds of what the big corporates want to do with computers.
Great to see them pushing work like this, building experiments, and talking about what they’ve learned.
Have you had a chance to look at atproto? I was a bit surprised to see no mentions of it. It powers Bluesky but is not coupled to it — the idea is essentially that your public data is meaningfully owned by you (can move hosting without losing identity) across all applications in a global collection, and different app backends can “derive” aggregated views (like Bluesky’s database) from the public network data of all users.
Yes, I think atproto is a great example of the “shared data” pattern for composable tools! Especially since it handles public social scale, which is not addressed by the other systems we mention.
AFAIK, atproto is primarily designed to support multiple distinct clients over shared data, but I also wonder if it could help with composing more granular views within a client. I previously worked on a browser extension for Twitter, and data scraping was a major challenge - one that seems easier to avoid when building on an open protocol like atproto.
Sorry we didn’t mention it — it is on our radar, but we ran out of space and had to omit lots of good prior art.
I should also mention, btw, that Bluesky's user-configurable feeds are a perfect example of a gentle slope from user to creator!
Isn't it just one more incarnation of long-existing patterns - the same way we keep reinventing Usenet and IRC?
I love the optimism, but I'm a pessimist. Even at the first paragraph:
> "The original promise of personal computing was a new kind of clay—a malleable material that users could reshape at will. Instead, we got appliances: built far away, sealed, unchangeable. When your tools don’t work the way you need them to, you submit feedback and hope for the best. You’re forced to adapt your workflow to fit your software, when it should be the other way around."
I already have objections: User and businesses overwhelmingly voted with their wallets that they want appliances. The big evil megacorps didn't convince them of this - Windows was a wildly malleable piece of software in the 90s and 2000s, and it didn't exactly win love for it. The Nintendo Switch sold 152 million units, the malleable Steam Deck hasn't broken 6.
Software that isn't malleable is easier to develop, easier to train for, easier to answer support questions for, and frequently cheaper. Most users find training for what's off-the-shelf already difficult - customizing it is something that only a few percent would even consider, let alone do. Pity the IT Department that then has to answer questions about their customizations when they go wrong - user customizations can easily become their own kind of "shadow IT."
The send off is also not reassuring:
> "When the people living or working in a space gradually evolve their tools to meet their needs, the result is a special kind of quality. While malleable software may lack the design consistency of artifacts crafted behind closed doors in Palo Alto, we find that over time it develops the kind of charm of an old house. It bears witness to past uses and carries traces of its past decisions, even as it evolves to meet the needs of the day."
If you think this is okay, we've already lost. People simply will not go back to clunky software of the 2000s, regardless of the malleability or usability.
Coauthor here.
You make a fair point! Ease of use matters. We all want premade experiences some of the time. The problem is that even in those (perhaps rare!) cases where we want to tweak something, even a tiny thing, we’re out of luck.
An analogy: we all want to order a pizza sometimes. But at the same time, a world with only food courts and no kitchens wouldn't be ideal. That's how software feels today -- the "kitchen" is missing.
Also, you may be right in the short term. But in the long run, our tools also shape our culture. If software makes people feel more empowered, I believe that’ll eventually change people’s preferences.
Well, if I may continue my pessimistic outlook, I would simply say that anyone can cook, but not everyone can cook. Programmers are chefs - we take ingredients called SDKs and serve them up into meals called custom software. Anyone who isn't a chef might need to buy the packaged cake mix at Walmart.
For something as complex as software, it's sad, but it's almost... okay? Every industry has gone through this; there was a time when cars were experimental and hand-assembled. Imagine if Henry Ford in the 1920s had focused on democratizing car parts so anyone can build their own car with thousands of potential combinations; I don't think it would have worked out. It is still true that you can, technically speaking, build your own car; but nobody pretends that we can turn everyone into personalized car builders if we just try hard enough.
I gotta say I don’t understand your point about cooking — billions of people who aren’t professional chefs cook meals every day! These meals may not live up to restaurant standards but they have different virtues — like making it taste just the way you like it, or carrying on a family tradition.
On that note, Robin Sloan has a beautiful post about software as a home cooked meal…
https://www.robinsloan.com/notes/home-cooked-app/
That said, I think talking about cars may be stronger ground for the argument you’re making. Mass production is incredible at making cheap uniform goods. This applies even more in software, where marginal costs are so low.
The point of our essay, though, is that the uniformity of mass produced goods can hinder people when there’s no ability to tweak or customize at all. I’m not a car guy, but it seems like cars have reasonably modular parts you can replace (like the tires) and I believe some people do deeper aftermarket mods as well. In software, too often you can’t even make the tiniest change. It’s as if everyone had to agree on the same tires, and you needed to ask the original manufacturer to change the tires for you!
First, thanks for the original article; it is great to know a team is going deep on this.
I am a bit fed up with software, less because of malleability than because of the cloud walled gardens. I can't open my Google Doc in something else the way I can open a PDF in different programs. Not without exporting it.
This got me interested, and I found remotestorage.io, which looks very promising. I like the idea that I buy my 100GB of cloud storage from wherever and then compose the apps I want to use around it.
I hadn't thought of malleable software... that's a whole other dimension! Thanks for introducing this as a concept worth talking about. Of course I have heard of elisp and used Excel, but hadn't thought of it front and centre.
In terms of cooking... I feel like cooking is potentially easier because, for the most part (with some exceptions), if I know food hygiene and how to cook, it is an additive process. Chicken plus curry plus rice. Software is like this too, until it isn't. An Excel sheet does a great simple budget but not a full accounting suite. With the latter you get bogged down fixing bugs in the sheet as you try to use it.
I think it is good you are researching this, as these are probably solvable problems in many cases.
Something I have always thought about is that it sometimes matters less whether the software is open source than whether the file format is. Then people can extend it by building more around the file format. A tool might work on part of the format while an app works on all of it. I use free tools to sign PDFs, for example.
Also adding that software being inflexible merely as a side effect of mass production describes the pre-Enshittification era, which we have already left behind.
Since the last decade or so at the latest, software is often designed as an explicit means of power over users, and applications are made deliberately inflexible to, e.g., coerce users to watch ads, purchase goods or services, or simply stay at the screen for longer than intended.
(Even that was already the case in niches, especially "shareware". But in a sense, all commercial software is shareware now)
> But in the long run, our tools also shape our culture. If software makes people feel more empowered, I believe that’ll eventually change people’s preferences.
I'm really curious to see how the overlap with BABLR plays out. In many ways we're doing the same experiments in parallel: we're both working on systems that have a natural tendency to become their own version control, and which try to say what the data is without prejudice as to how it might be presented.
In particular BABLR thinks it can narrow and close the ease-of-use gap between "wire up blocks" style programming and "write syntax out left to right" style programming by making a programming environment that lets you wire up syntax tree nodes as blocks.
It's still quite rough, but we have a demo that shows off how we can simplify the code editing UX down to the point where you can do it on a phone screen:
https://paned.it/
Try tapping a syntax node in the example code to select that node. Then you can tap-drag the selected (blue) node and drop it into any gap (gray square). The intent is to ensure that you can construct incomplete structures, but never outright invalid ones.
> Coauthor here.
> That’s how software feels today: the “kitchen” is missing.
I believe you'll want to read this essay which appeared in the Spring 1990 issue of Market Process, a publication of the Center for the Study of Market Processes at George Mason University ...
"An Inquiry into the Nature and Causes of the Wealth of Kitchens" by Phil Salin
Having worked for him, I'd say his Wikipedia entry doesn't do him justice, but it's a good start if you're curious. Like your Ink & Switch group, he spent many years trying to create a world-changing software platform [AMIX, sister co. to Xanadu, both funded in the 1990s by Autodesk].
http://www.philsalin.com/kitchens/index.html#:~:text=An%20In...
Look at HyperCard (more or less dead, regrettably) or Excel and you'll see many useful "applications" created by non-programmers over the years.
People want to create, but need tools to make this easier / more abstract than regular programming. Most companies want to get them into their walled gardens instead, especially web-based companies today.
you should take a look at TFA; both of those are mentioned in great detail! it's a good read
> Windows was a wildly malleable piece of software in the 90s and 2000s, and it didn't exactly win love for it.
Is that so? I still remember the custom styling options in Win98 and ME/2000 very fondly. And there were lots of people who invested effort in making their own color schemes, meticulously assembling personal toolbars in Office, etc. (The enthusiasm went away the first time you had to reinstall and were faced with the choice of doing it all again or sticking with the defaults. But I'd chalk this up to Windows not treating the customization data as important enough to provide backup/export functionality, not to people not wanting to customize.)
The features increasingly went away in later Windows and Office versions, but I assumed it was some corporate decision. Was there ever actual backlash from users against those features?
Tech-oriented people love software malleability and can also handle the responsibility - e.g. understanding that something that's broken and was customized by you could have been broken by you.
Non tech-oriented people, the masses, absolutely love customizability and malleability--but aren't willing to handle the responsibility. They will reach out to tech support who can't possibly know every customization option of every application and its effects, and complain when they tell them to reset/reinstall.
And in a corporate environment where the company provides the PC, the company would rather not deal with it. Office dominates at the workplace, is mostly making money from corporate users, and users want it to behave the same way it does in the workplace. So any backlash by users is simply not going to matter unless it might cause companies to not renew their licenses.
A company I work for is moving to Office-on-the-web for PCs that are used by people who don't really use Office that much except possibly to read Word docs, in order to save on licensing costs I presume. It's even less customizable than any desktop version. So the trend is going to continue.
You're talking about a world in which costs are centralized. A central entity handles all R&D costs and all customer support costs for one product.
If you split the support costs between many members of a community though, you don't need to fear customization. Then, ideally, the users who are most alike will support each other, the same way you can get a degree of support for some particular flavor of Linux by seeking out other people who use that flavor (or another one that's enough like it)
Backlash will be in the form of working, competing software maintained by communities, precisely because this is the only form of backlash that might cause companies not to renew their licenses.
What is the "responsibility" of customizing the color scheme of your own PC?
Well, there's a (modest) learning curve involved in customizing color schemes, and of course in more complex tasks that are still within the domain of user options.
Users can be fearful of "messing it up" if they change defaults. Making changes necessarily confers responsibility to follow instructions, learn how to alter settings, and know which options are appropriate to change and which are not.
Not setting the text color the same as the background color and making everything unreadable, including the UI to change the color back?
That takes a pretty basic safety mechanism to address: require confirmation after the change. Windows has (had?) that; after 15 or 30 seconds or whatever, a change (like to resolution or something) reverts without confirmation. This makes changes of all sorts easy and cheap to perform. The worst case is you idle for 30 seconds waiting for it to go back to a legible form.
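The confirm-or-revert mechanism described here is simple to sketch. A minimal illustration in Python, assuming a plain settings dictionary rather than any real display API (`RevertableSetting` is a hypothetical name, not any OS facility):

```python
import threading

class RevertableSetting:
    """Apply a change, then revert it automatically unless confirmed in time."""

    def __init__(self, settings, timeout=15.0):
        self.settings = settings  # e.g. {"resolution": "1024x768"}
        self.timeout = timeout
        self._timer = None

    def apply(self, key, new_value):
        old_value = self.settings[key]
        self.settings[key] = new_value
        # Schedule an automatic rollback; confirm() cancels it.
        self._timer = threading.Timer(self.timeout, self._revert,
                                      args=(key, old_value))
        self._timer.start()

    def confirm(self):
        # User clicked "Keep these settings": keep the new value.
        if self._timer:
            self._timer.cancel()
            self._timer = None

    def _revert(self, key, old_value):
        # Timeout elapsed with no confirmation: restore the old value.
        self.settings[key] = old_value
```

If `confirm()` is never called, the timer fires and the old value comes back; the worst case, as noted, is waiting out the timeout.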
I think having a monochrome mode (which might be available at start time, and would also temporarily reset the font) would help with this and other problems (e.g. if one colour of the display is defective). It might be used for the UI that confirms the change, and could also be displayed when you start the computer, so that you can recover from this and other problems (including screen resolution, colours, fonts, languages, and many more).
And when they click through the confirmation without reading it like the vast majority of users?
If you can't see it because you borked up the colors badly enough, why would you be clicking on it?
Didn't say the buttons were invisible, just text.
Even if this specific example is flawed, non-technical users can and do end up in similar nonsensical situations that require a call to support to sort out. The more customization that's possible, the more complicated those calls can get. (Think of the support guy who has to figure out that Grandma's Windows Home setup has custom group policy settings that her well-meaning grandson set up to make things simpler for her by hiding this or that, and now she can't follow the tech's instructions that work for 99.9% of users.)
Not only that, but they do so enough that the added cost to field those support calls is enough for companies to change their products to reduce their likelihood.
Almost no-one on this forum falls into the category of user I'm describing. And this kind of user is one of the most common for general consumer software. There is a real cost burden to supporting software with configurability.
And when this kind of thing gets messed up, do users go "Oops! My bad!"? No, they go "This software sucks, I'm going to use <competitor> instead where this kind of thing never happens!"
A common failure mode I’ve seen: since Windows 8/8.1 iirc, so-called “Microsoft accounts” are used to login to the OS, as opposed to local user accounts, which were the status quo for personal computers, and are managed locally by the OS on behalf of Administrator users. Many legacy Windows users had and have no idea what the difference is or why it matters, but part of the Microsoft Account setup flow in Windows OOBE involves setting up 2FA for the new Microsoft Account, and I think it will let you use email or SMS, and maybe even a phone call to get the 2FA code. I think you are given the option to complete the 2FA at a later time, in case the code is delayed, but I forget for sure.
I can’t count how many people I helped to regain access to their computer login because they lost access to the method used to receive 2FA codes for their Microsoft account, which is necessary to log in if you have forgotten your password. The Microsoft account user setup won’t let you make a password-free login unless you use a local account, and short easily guessable passwords don’t meet their online account security requirements. Most people probably don’t want a Microsoft account if it has this failure mode, but people don't know the trade-offs at the time of user account setup, and Microsoft uses that ignorance as leverage to get people signed into everything, so that you will have opted in to all of this. It’s such an own-goal by Microsoft and it makes me feel for users who have no idea how any of this works. It’s a hard problem to solve, I’m sure, but it shouldn’t be like this.
The people who are most disadvantaged by the high tech highly secure thrust of modern tech are those who have the least skills with technology. Low skill users are also most at risk for scams and malware and other kinds of tactics, so I don’t mean to say that having no password is good. Having no password is a bad solution to the problem of computers being hard to use for many people, and they don’t know what they don’t know, so anything that they haven’t seen before is a cause for concern or alarm to their mind. Since most people have forgotten that they even have a Microsoft account by the time they have trouble logging in to their computer using one, they click around until they get to the account recovery, and then usually get their account locked because they can’t solve the security challenges that they never faced before or anticipated when doing the initial setup perhaps years prior.
Remembering where the setting is so if you want to update it again you do it on your own instead of calling tech support.
The old lady who calls tech support saying "half my screen is grey!" and it turns out she accidentally resized her taskbar to the maximum size.
> People simply will not go back to clunky software of the 2000s, regardless of the malleability or usability.
Software in the 2000s was markedly better than software today. But it's cheaper and easier for companies to produce shitty software, so that's what we get. It has nothing to do with consumer preference.
Training and support for applications isn't a thing outside of enterprises, especially for SaaS web apps. You simply cannot reliably get support for Google or Facebook services unless you know some very obscure channels. It is wrong to say this is a trade-off: it is a regression.
Modders tweak and change games they like pretty much all the time, whether they are malleable or not.
Decker absolutely enables malleable software: https://beyondloom.com/decker/
You can tell when a platform is succeeding at this by looking at its adoption among non-programmers.
In my experience, people yearn for coding and modding. Given the tools, a whole bunch of people will pull off domain-specific miracles with macros and the like. I'm almost convinced that teaching programming is easier than teaching software development boilerplate.
I really love those customization power charts, and I'm really happy to see that my anecdote-based thoughts might actually have some grounding behind them.
I was thinking a lot about software malleability, but from a technical perspective. I am on the verge of building something useful, if only I could find the time to do it.
Here's my premise: if you use something like a game engine, say Unity or Unreal, you basically have the ability to modify everything in real time and have it reflected inside the editor immediately. You could change textures, models, audio, even shaders (which are a kind of code), and have the editor reload just that tiny resource instantaneously.
But not code code - for some reason computer code must go through a compilation, optimization and linking process, creating a monolithic executable piece of code that cannot be directly modified. This is even true of dynamic languages like JS/TS, which support modification at the fundamental level, yet somehow lose this ability when using advanced toolchains.
Which is weird, since most compilers/OSes support this dynamism at a fundamental level - the basic unit of the C compiler's machine interface is a function, and the replacement unit in most OSes is a dynamic library, a collection of said functions - yet changing these at runtime is almost unheard of and most of the time suicidal.
This is because of a couple of problems. Memory allocation: replacing parts of a program at runtime can lead to leaks if we don't clean up after ourselves. Resource allocation: this can be solved by tying resource lifetimes either to outside factors or to the lifetime of the function or its containing unit.
A demonstrated analog of this is OS processes, which can be terminated abruptly and their binaries replaced without fear of resource leakage.
The final problem of data corruption can be solved by making such program parts stateless, and making them use a store with atomic transactions.
I have a pretty good idea of how to build such an environment at the low level. Its core idea is having process-like isolation barriers isolating small pieces of programs, and an object-database-like datastore that can never be corrupted, thanks to transactional changes (which can be rolled back, enabling stuff like time-travel debugging). Said processes could communicate either via messages/events or by sharing parts of their memory.
Such a system would allow you to fearlessly change any part of the source code of a running application at runtime. Even if you mess up the code of a component - say, even to the point that it doesn't compile - all that would happen is that the single component would cease to function, without affecting the rest of the app.
> But not code code
but that's not true: smalltalk, lisp, pike, erlang and some other languages allow you to change code at runtime, only requiring the recompilation of the changed unit of code (depending on the language. in pike it's at the class/object level)
> process-like isolation barriers isolating small pieces of programs, and an object database-like datastore that can never be corrupted due to transactional changes (which can be rolled back, enabling stuff like time-travel debugging).
doesn't smalltalk do pretty much that? i'd be really interested in learning how your idea differs. you may also want to look at societyserver/open-Team: https://news.ycombinator.com/item?id=42159045
it's a platform written in pike that implements an object storage and allows code objects in it to be modified at runtime. transactions are at the object/class level (if the class fails to compile, the objects are not replaced). it stores versions of classes so a rollback is possible, although not implemented in the interface. (meaning right now, if i want an older version i have to roll back manually)
> Such a system would allow you to fearlessly change any part of the source code of a running application at runtime - even if you mess up the code of a said component - say event to a point that it doesn't compile - all that would happen would that single component would cease to function without affecting the rest of the app.
smalltalk does that, as does societyserver/open-Team, or the roxen webapplication server (also written in pike) and i am pretty sure some lisp and erlang systems do as well.
Tbh, I have never heard of Pike or societyserver before, will check those out!
As for smalltalk, I am also not intimately familiar with the language, but what I have in mind is somewhat lower level, with emphasis on C-like struct layouts stored in a POD way (so raw structs inside arrays and the like).
I'd say a key difference is that in my language (working name Dream, because I started the project as my 'dream' language, and picking names is hard), these isolation contexts are explicit, and pointers can't really cross them.
There are special 'far' pointers that do have the ability to reference external objects in a different context, but there's an explicit unwrap operation that needs to happen and that can fail, as the object is not guaranteed to be reachable for whatever reason. Processes can be explicitly deleted, meaning all reference operations on them will fail.
To be clear, when i say process, i mean my lightweight internal isolation thing.
So in summary, my language is procedural inside processes, with in-process garbage collection, C-like performance and explicit method calls. Between processes, you either have smalltalk-like signals, or you can do Rust-style borrows, where you can access objects inside the process for the duration of a method call.
It has an erlang-like 'just let it crash' philosophy, but again is a C-like procedural language (or shall I say Go-like, since it has total memory safety and GC).
It also has familiar C-like syntax, and quite a small(ish) feature set outside of the core stuff.
I have a huge doc written up on it. No idea if it would work, or whether it would be useful if it did, but I do have some tentative confidence in it.
(Also no claims on being original or inventive.)
> I have never heard of Pike or societyserver before
pike/roxen had a brief window of growth in the 90s but the leaders at the roxen company (not the devs) missed the opportunity to work with the FOSS community.
pike is fully C-syntax, and it is very performant, so that may be interesting for you.
societyserver is my fork/continuation of a university project called open-sTeam that stopped development more than a decade ago. i continue to use it, and when i am not busy earning money i try to work on it, but i haven't yet been able to build a community around it.
the process isolation you talk about sounds like something that erlang promises as well, but i don't know enough about erlang to tell. i'd be curious to learn more though.
open-sTeam/societyserver built an object-level access control system. method calls on other objects are intercepted and only allowed to pass if the caller has the necessary permission to access that object.
it's not process isolation, but also a concept i find interesting
Things like this exist. They're not that useful in practice, because it's like live editing PHP directly on the server, only more so. Most of the value of edits to a piece of code doesn't come from running it once, it comes from having that edit in a durable, managed place. And however much effort you put into your code management database, it's hard to beat the amount of tooling that exists for the worse-is-better "mostly-ascii files on a unix filesystem" model.
I mean, Tcl/Tk has had this since the 90s. Rewrite your procs (functions) on the fly, delete GUI items on the fly, generate new events, create listeners on the fly, etc, etc.
Quite easy to create a GUI that's interactive AND a console you can script on at the same time to inspect / edit / change code.
For example, don't like your window attributes? Write code to destroy it, and re-create it and keep your "live" data unchanged, and it will redisplay in the new style / layout.
And sure, you could code up atomic transactions quite easily.
Itcl even lets you create/add/remove classes or specific class instances on the fly, or redefine class methods.
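The "redefine the code, keep the live data" trick described here has analogues in most dynamic languages. A small Python sketch (illustrative only, not Tcl): rebinding a method on a live class makes existing instances pick up the new behaviour while their state survives.

```python
class Counter:
    """A live object with state we want to preserve across a code change."""
    def __init__(self):
        self.n = 0

    def step(self):
        self.n += 1

c = Counter()
c.step()
c.step()

# Redefine the method on the live class; the existing instance
# keeps its state (c.n == 2) and simply uses the new behaviour.
def step_by_two(self):
    self.n += 2

Counter.step = step_by_two
c.step()
print(c.n)  # 4: two old-style steps plus one new-style step
```

This is the same idea as rewriting a Tcl proc while the GUI keeps running: the data outlives the code that operates on it.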
I concur that Tcl/Tk is enormously versatile and productive. I've always regarded Tcl as a Lisp-like language; Tcl and Lisp share characteristics, like homoiconicity, that enable the benefits you describe.
Tcl isn't as widely known and used as it deserves to be. I think that's partly due to its syntax being sufficiently different from "mainstream" languages. The learning curve isn't particularly steep, but it is enough that developers question whether it's worth the effort to go there.
FWIW Tcl 9.0 has recently been released. The language has been enriched with sophisticated object-oriented capabilities, coroutines, full math tower, etc. It's also rather easy to write extensions in C.
Anyway, the GUI toolkit (Tk) has been "borrowed" by many other languages (e.g., Python's tkinter), so quite a few programmers use Tcl/Tk and know it not.
I never claimed to be innovative, but sadly all these cool features are nowhere to be found in modern languages. And for some reason, they never appeared in a fast(ish) language, even though I'm sure the JVM is very well equipped to handle this kind of dynamism.
Recompiling a method[1], popping the stack frame, and re-entering the new method is a very, very common debugging pattern on the JVM. I miss it every day that I'm on vastly dumber platforms
https://www.jetbrains.com/help/idea/altering-the-program-s-e... -> https://www.jetbrains.com/help/idea/pro-tips.html#drop-frame
1: pedantically, you're recompiling the whole class, but usually it's only one method changing at a time unless things are really going bananas
DCEVM (RIP) allowed swapping the method signature, too, but that is a lot more tricky to use effectively during debugging (e.g. popping the stack frame doesn't magically change the callsite so if you added extra params it's not going to end well) e.g. https://github.com/TravaOpenJDK/trava-jdk-11-dcevm#trava-jdk...
For anyone interested in this: Tsoding (Twitch and YouTube streamer of "recreational programming") demonstrates this in one of his projects, where he hot reloads a dynamic library without interrupting the main program, to test different functionality.
https://youtu.be/Y57ruDOwH1g?si=feGioEeSZ5eborb3&t=84
Here's an operating system kernel in Rust that can hot load/unload modules at ELF object boundaries, made safe by trusted compiler allowing only safe Rust:
https://www.theseus-os.com/
Upgrading a module in a way that changes its data structures still requires writing a converter from the old format to the new one.
This is cool and all, but the problem with doing this in C is that if you accidentally trigger a memory corruption bug while you're just messing about with the code, you're now forced to restart.
Not a problem in a toy app, but in something like a huge program, it can be a PITA to reload everything and get back to where you were.
Yeah, C really does not support this as a language (or implementation) feature - it's something that can be hacked in, with a lot of difficulty, inconvenience, and loss of safety guarantees.
eBPF for apps?
A quick shout out to a light in the dark (in the bio-imaging space):
https://napari.org/stable/
I love napari. I remember downloading it on a whim, and while poking around, I accidentally opened its built-in python console. Half the time, if I'm writing a plugin for it, I open up the console just so that I can play around and print out stuff and try new things.
Everything, even the viewer itself, is accessible from the repl. Nothing hides behind a black box.
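That "everything reachable from the repl" design is easy to approximate in any Python app: hand an interpreter a namespace containing your live objects. A minimal sketch using the stdlib `code` module (this is not napari's actual implementation; `Viewer` here is a stand-in class):

```python
import code

class Viewer:
    """Stand-in for an application object we want to expose to the console."""
    def __init__(self):
        self.layers = []

    def add_layer(self, name):
        self.layers.append(name)

viewer = Viewer()

# An interactive console whose namespace contains the live viewer object.
# console.interact() would drop the user into a repl; here we push a line
# programmatically to show that console edits hit the real object, not a copy.
console = code.InteractiveConsole(locals={"viewer": viewer})
console.push("viewer.add_layer('cells')")
print(viewer.layers)  # ['cells']
```

Because the console shares the application's objects rather than copies, nothing has to hide behind a black box.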
I wish that the first semester of programming class deliberately left code out of the material. IMHO students should start with something like this short list:
Then move on to what programming could/should be: Only then, after having full exposure to what computers are capable of and how fast they really are, should students begin studying the antipatterns that have come to dominate tech: I realize this last list is contentious, but I could go into the downsides of each paradigm at length. I'm choosing not to. Since we can't fix the market domination of multibillion-dollar companies who don't care about this stuff on any reasonable timescale, maybe we can pull the wool off the children's eyes and give them the tools to tear down the status quo.
I suspect that AI and geopolitical forces may take this decision away from us though. It may already be too late. In that case, we could start with spiritual teachings around philosophy, metaphysics and wisdom to give them the tools needed to work with nonobjective and nondeterministic tech that's indistinguishable from magic.
This kind of list may be right for a trade school. If that's what you're referring to, then I don't disagree. Those students want to learn how to use those tools.
But if the class is computer science at a university, then the students want to go deeper and learn how to improve upon and compete with the existing tools. They need the theory first, which means Lisp (or a derivative) and an imperative language.
Hey you're right, and I agree!
In my freshman year of college, I thought I was hot stuff because I knew C++, so I tried to place out of some of the 100 level classes. But the test seemed strange to me, focusing more on abstractions than syntax. I don't remember if I failed, but I don't think I placed out of anything. One of my first classes was on Lisp, specifically Scheme, and it completely blew my mind and forever changed how I look at programming.
Just before I graduated in 1999, they started transitioning to Java, because the web was so popular. But most of us thought that was a mistake. I don't know if they ever switched back to Lisp.
On a funny note, I took that whole class without realizing that Lisp statements could be broken up into separate lines. Or more accurately that each line just declares equivalences that get reduced down to their simplest form by the runtime. So I wrote all of the homework assignments as one giant function of nested parentheses, even for some of the more complex tasks on sorting primitives like lists and trees. I picture the graders shaking their heads in a mix of frustration and awe hahaha.
*data structures not primitives. My brain is mush.
Rather than raw PostScript, I would suggest METAPOST --- it's a lot more approachable, and with mplib as part of luatex, far more approachable (no need for Ghostscript or distilling to PDF).
I would swap things around and place assembly or C before the erlang, etc. step.
I agree with all of the assertions about what software should be.
But... I think a lot of it already is customizable, and users don't want to configure. End-users (or doctors) hate having to learn more about software than they absolutely must. Just as an example, Epic (the EHR from the essay) definitely has the ability to mark fields as optional/required. Someone just needs to get in and do it, and they don't want to / don't know how.
The inaccessibility of config to laypeople may actually be where AI shines. You prompt an in-app modal to change X to Y, and it applies the change. A natural language interface to malleability.
This. Making something super customizable is a lot harder to implement (the code becomes too generic, hard to reason about and debug) and often presents a worse UX ("why are there so many options??"). Having the UX design team interview and consider the needs of each user role interacting with the application, and ensuring the app displays/asks only the appropriate info for each user (hiding the rest and adopting smart defaults instead of requiring everything), is easier to implement, safer, and in many cases produces more intuitive interfaces than highly customizable ones.
Creating browser extensions is easy enough. It's hacky web dev, which I very much enjoy. The problem lies in distribution.
Yes, nowadays all the main browsers are pretty much locked down, and you have to use the official app stores to sign and distribute your extensions, even if it's just something for your own use. I really wish this were more open, since extensions allow for so many cool use cases because they don't have all the same restrictions that regular webpages have (CORS, for example).
I remember having this idea in undergrad in 2011. My big wish was that every app would ship with a scripting language or an API. The problem is that it's not at all straightforward to do this. The more complex an app, the more important a facade (like a front end) becomes.
Apple did this back in the mid-1990s (before OS X) with AppleScript. Every application was supposed to ship with metadata that described its object model, along with methods that could be invoked on it. AppleScript was sort of a protocol or interface standard that allowed scripts to automate application actions (and more) without having to use GUI macros. Scripts could be written with a variety of syntaxes. It was pretty cool. However, it turned out that providing an object model and API surface was a pretty heavy lift for application developers, and most just half-assed it. And while a fairly robust community developed around AppleScript, it was too small to generate any noticeable uplift in sales for either Apple or independent software vendors. Thus not really commercially viable.
And Microsoft has had OLE -- which is sort of analogous to the object-model portion of AppleScript -- for ages.
My ideas about how to fix this (with a new operating system design) involve:
1. Use of FOSS will be helpful, since it can be improved if something is wrong with it.
2. UI controls are objects with data models like any others, so even if an API is not provided by a program, these UI controls, and the data associated with them, can be used in scripts like any other API can.
3. Capabilities are needed for I/O, and proxy capabilities can be created and used. Even if the program does not expect the I/O to be filtered or modified, the system ensures that it can be done anyway (and the command shell in the system is designed to allow this, too).
4. This metadata is required even for a program to start (due to the way the I/O works).
Yeah I got to AppleScript during its dying days.
The fundamental problem with customizability is that code-path complexity grows exponentially with branching. So it works as long as your app is sufficiently simple, but eventually the exponents catch up and eat your software alive.
I think that if the entire computer and operating system are designed better, and the software is designed better, then there are things to be done which would improve the customizability and other things. (I mentioned some of my ideas in some other comments.)
FOSS also helps, but software being FOSS does not by itself solve the problem (as the article mentions); it is just one of the things to be done.
UNIX programs with pipes are also one thing that helps, but not perfectly. Nevertheless, writing programs that work this way on UNIX systems is helpful. (For working with picture files, I almost entirely use programs that I wrote myself which use farbfeld, and use pipes to combine them; I then convert to PNG or other formats when writing to disk. I do not use farbfeld as a format to store pictures on disk, only as the intermediate format to use with pipes.)
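farbfeld works well for this style of composition because its layout is tiny: the magic bytes "farbfeld", big-endian 32-bit width and height, then one big-endian 16-bit RGBA quadruple per pixel. A minimal Python sketch of an encoder, decoder, and a pipe-style invert filter (the function names are illustrative, not the commenter's actual tools; a real filter would read stdin and write stdout):

```python
import struct

MAGIC = b"farbfeld"


def encode(width, height, pixels):
    """Encode RGBA pixels (16 bits per channel) as a farbfeld byte stream."""
    out = [MAGIC, struct.pack(">II", width, height)]
    for r, g, b, a in pixels:
        out.append(struct.pack(">4H", r, g, b, a))
    return b"".join(out)


def decode(data):
    """Parse a farbfeld byte stream back into (width, height, pixels)."""
    assert data[:8] == MAGIC, "not a farbfeld stream"
    width, height = struct.unpack(">II", data[8:16])
    pixels = [struct.unpack(">4H", data[16 + i * 8: 24 + i * 8])
              for i in range(width * height)]
    return width, height, pixels


def invert(data):
    """A pipe-style filter: invert every channel except alpha."""
    w, h, px = decode(data)
    return encode(w, h, [(65535 - r, 65535 - g, 65535 - b, a)
                         for r, g, b, a in px])
```

In an actual pipeline each stage would do `sys.stdout.buffer.write(invert(sys.stdin.buffer.read()))`, so filters chain with ordinary shell pipes.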
Not directly related, but I'm sad when I fire up the old Windows phone that I bought for nostalgia and nothing works anymore beyond the base OS and maybe the old Bing.
I get it, the world moved on, people have to manage APIs, updates... but yeah.
There ought to be a law forcing vendors to open all specs, so that enthusiasts can keep using the device. It should be criminal to force planned obsolescence and create e-waste like this.
Agreed. stopkillinggames.com is trying to put a stop to this by asking devs to build in an end-of-life plan for games that are retired.
So what is it like? VBA? Hypercard?
Most people just don’t have the skills or inclination to tinker even with ham radios or cars.
On the other hand with the right to repair, you could call a repairman. And now — an agent or robot!!
I made an app that lets you build one-off desktop utilities: https://viberunner.me
https://news.ycombinator.com/item?id=44236729
Ultimately I think it’s too open ended. Users got overwhelmed with a chat interface and couldn’t think of something useful to build on the spot.
Maybe a slow burn approach works best.
After more than 2 apps get involved with anything I’m back on paper. It still works well :)
I do think that they are right about many things, although I have my own ideas about how to improve them (and I do not agree with all of the ways they (Ink & Switch) are going about it).
In UNIX systems you can use pipes between programs (if the programs support that; many modern programs don't support it very well), although there are still problems with that too. (I also disagree with the idea that text (especially Unicode text, although the objections apply even without a specific character set) would be the universal format.)
My idea of a computer design and operating system design is intended to do things which will avoid the problems mentioned there (although this does not avoid needing actually good programming, and such things as FOSS etc still have benefits), as well as having other benefits.
Some of the features of my design are: CAQL (Command, Automation, and Query Language), UTLV (Universal Type/Length/Value), and proxy capabilities. (There are more (e.g. multiple locking and transactions), but these will be relevant for this discussion.)
Like OpenDoc and OLE, you can include other kinds of things inside any UTLV file, by use of the UTLV "Extension" type. The contents of the extension would usually be UTLV as well, allowing the parts to be manipulated like any others; but even if the contents aren't UTLV (e.g. for raster images), there would be functions to convert them and to deal with them, so it will still work anyway.
With those things in combination with the accessibility features (one of the principles is that accessibility features are for everyone, not only for people with disabilities; among other things this means there is no separate "accessibility" menu) and m17n and other features, you can also do such things as change colours, fonts, etc., without much difficulty. (They might not seem related at first, but they are.)
I had also recently seen https://malleable.systems/mission/ which seems to be related (you might want to read this document even if you are not interested in my own comments). One part says, "If I want to grab a UI control from one application, some processing logic from another, and run it all against a data source from somewhere else again, it should be possible to do so.", and with CAQL and UTLV and proxy capabilities, this can be done easily, because the UI controls are callable objects (which can be used with CAQL) like any other one, the data source can use UTLV (which can be queried and altered by CAQL), and the interaction between them can use proxy capabilities.
The challenges here feel insurmountable, but I can't help but feel there's a certain inevitability to the de-monolithization, the de-totalization, of computing, which today is ensconced so wholly inside applications, with experiences purely pre-defined by a given app.
Already with AI we are seeing a huge uptick in people's expectations that agents operate across apps. The app is losing its monopoly of power, losing its primacy as the thing the user touches. See "How Alexa Dropped the Ball on Being the Top Conversational System on the Planet", an article about a lot of factors, many of them Conway's Law & corporate-fiefdom oriented, but which touches repeatedly on the need to thread experiences across applications, across domains, where the historical "there's an app for that" paradigm is giving way, is insufficient. https://www.mihaileric.com/posts/how-alexa-dropped-the-ball-... https://news.ycombinator.com/item?id=40659281
AI again is an interesting change agent in other ways. As well as scripting & MCP'ing existing tools/apps, the ability to rapidly craft experiences is changing so quickly. "Home-Cooked Software and Barefoot Developers" speaks so directly to how this could enable vastly more people to craft their own experiences. I expect that over time frameworks/libraries themselves adapt, that the matter of computing shifts from targeting expert developer communities who use extensive community knowledge to do their craft, toward forms that are deliberately de-esotericized, crafted in more explicitly compositional manners that are more directly malleable, because AI will be better at building systems with more overt declarative pieces. https://maggieappleton.com/home-cooked-software/ https://news.ycombinator.com/item?id=40633029
Right now the change is symbolic more than practical, but I also loved seeing Apple's new Liquid Glass design system yesterday, in part because it so clearly advances what Material set out to do: construct software of multiple layers, with the content itself being the primary app surface. Liquid Glass extends that surface even further, making it practically full screen always, with tools and UI merely refractive layers above the content. This de-emphasizes the computer and makes the content the main thing, by removing the boxes and frames of encirclement that once defined the app's space, giving way to pure content, with the buttons mere portals of function floating above. In practice it's not substantially different from what came before, yet, but it feels like the tools are more incidental, a happenstance layer of options above the content, and it suggests to me that the tools could change or swap. https://www.apple.com/newsroom/2025/06/apple-introduces-a-de... https://news.ycombinator.com/item?id=44226612
There's such a long arch here. And there's so many reasons why companies love and enjoy having total power over their domain, why they want to be the sole arbiter of experience, with no one else having any say. We've seen collapses of interesting bold intertwingular era API-hype hopeful projects, like Spotify desktop shutting down the amazing incredible JavaScript Apps SDK so long ago (2011-2014). https://techcrunch.com/2014/11/13/rip-spotify-apps-rip-sound...
Folks love to say that this is what the market wants, that there is convenience and freedom in not having any choices, in not having to compose tools, in everything being provided whole and unchanging. I'd love to test that thesis, but I don't think we have evidence now: 99.999%+ of software is built in the totalistic form, tablets carved and passed down to mankind for us to use as directed (or risk anti-circumvention felony charges!). We haven't really been running the experiments to see what would be good for the world, what would make us a better, happier, more successful world. Who's going to foot the bill, who's going to abandon control over their users?
And it's not something you can do alone. The really malleable software revolution requires not individual changes, not individual apps adding plugins or scripting. The real malleable-software shift comes when the whole experience is built to be malleable: general systems research into operating systems that host not just applications, but views and tools and data flow, history, event sourcing, and (perhaps) transactions. No one piece of software can ever adequately be malleable on its own: real malleable software requires malleable paradigms of computing, upon which experiences, objects, and tools compose.
It all sounds so far off and far-fetched. But where we are now is a computing trap, one posited around a philosophy of singularness and unconnectedness delivered down to us users/consumers (a power relationship few want to change!). The limitations of the desktop application model, as it has been morphed into mobile apps and watch apps, feel like an ever more cumbersome limit, a gate on what is possible. I feel the dual strongly: I'm with those pessimists saying the malleable software world is impossible, that we can never make the shift, I cannot see how it ever could come about; and yet I don't think we can stay here forever. The limitations are too great, and the opportunity for a better, more open computing to awaken is too interesting and too powerful for that possibility to lie slumbering forever. I want to believe the future is exciting, in good ways, in re-opening ways, and although I can hardly see who would fund it or why, and although the challenge is enormous, the project of rebuilding mankind's agency within the technological society feels necessary & inevitable, and my soul soars at the prospect. Malleable software: thus we all voyage towards computing.
I agree, I feel like the authors are underestimating the effect the new AI is already having on the concept of local software crafting. For my entire lifetime, I've had friends ask me to help them build software that accesses some data somewhere, and I've always had to turn them down because there are too many unknowns.
I've spent countless hours thinking about how to build a business that would solve some class of problems my friends have encountered and I've almost always had to conclude that the business would probably not be profitable, so their ideas were never tested.
Now, with a 2025 chatbot, I can confidently estimate the feasibility of a basic project in minutes and we can build the thing together in hours. No one needs to make a profit, build a new business, or commit to ongoing maintenance. Locally crafted software is taking off dramatically and I think it will become the new normal.
> I agree, I feel like the authors are underestimating the effect the new AI is already having on the concept of local software crafting
Coauthor here -- did you catch our section on AI? [1]
We emphatically agree with you that AI is already enabling new kinds of local software crafting. That's one reason we are excited about doing this work now!
At the same time, AI code generation doesn't solve the structural problems -- our whole software world was built assuming people can't code! We think things will really take off once we reorient the OS around personal tools, not prefabricated apps. That's what the rest of the essay is about.
[1] https://www.inkandswitch.com/essay/malleable-software/#ai-as...
Yes, but I think we have a somewhat different idea about the market forces. My impression from your essay is that you believe app developers will add APIs that enable personal tools, and only then will local software crafting take off.
My belief is that it is happening already: local software crafting is happening now, before the tools are ready. People aren't going to wait for good APIs to exist; people will MacGyver things together. They'll scrape screens (sometimes with OCR), run emulated devices in the cloud, and call APIs incorrectly and abusively until they get what they need. They won't ask for permission.
A lot of software developers may transition from building to cleaning up knots.
Also your two year old post, Malleable software in the age of LLMs, https://www.geoffreylitt.com/2023/03/25/llm-end-user-program...