Lending to firms and individuals engaged in the production of goods and services – which most people would imagine was the principal business of a bank – amounts to about 3 per cent of that total.
The article calls out Gary's Economics. I would second that suggestion. Excellent channel! The linked video is his most popular and a great place to start.
I also really don't like many of the reconstructions because they're garishly oversaturated. That part is simple enough to explain as a skill issue; I imagine it like the Ecce Homo restoration. Combine that with how, as with dinosaur or Neanderthal reconstructions, you have to be either too literal or too artistic. Both are wrong, but having no nuanced context, people take the first one they see as absolute truth, and all future explanations are then pitted against that first exposure.
I like that we attempt to synthesise what we know, but I wonder whether reconstructions are more helpful or harmful to understanding. For example, the skin-on-bone reconstructions of dinosaurs depicted in Jurassic Park form much of what people think they know about dinosaurs. It's really unlikely any of them actually looked like that, though. For one thing, we have evidence for feathers. For another, if you go in reverse and draw modern animals in the same style based only on their skeletons, you get similarly wild reptilian results. Do these reconstructions help?
The title of the video, the premise, the cover image… It's definitely not the best sales pitch. I know it's not normally something I'd post here. Why post it then?
A bunch of my friends struggle with dating. What Adam's put together here is actually really great, especially since he wrote an entire book and then just decided to give it away for free. It's unfortunately a Google Doc, and I don't feel comfortable sharing a PDF version without his permission. Seriously though, check out How to survive dating in the 21st century. It's genuinely good dating advice. Maybe not perfect, but a hell of a lot better than most of what grifters will try to sell you. I'm posting it here mostly so I can point my friends at it. Check it out!
This just sounds like rapid application development (RAD) again. (Everything old is new again.) Nothing against it, but it's been kind of funny watching the vibe coders essentially reinvent Visual Basic or Dreamweaver. That's not a knock against either of those. It's just that as you follow the argument, you get back to, "Why am I using a language at all?" Yeah, that's what RAD was all about. Then you have to ask, "Why isn't that the dominant way we program?" Could just be that those older tools weren't as good as vibe coding is. From my experience though, it was because they suffered on scale, stability, and labour supply.
On the scale side, as the thing gets bigger and bigger, you need to actually understand and control what the computer's doing. I got my first job building and maintaining a Microsoft Access database. It's very easy to click and drag in a button or dropdown, pick the table it loads from, or add a bit of VBA for an element's event handlers. The problem is doing naive things: loading every record in the system into a dropdown element, or running a ten-way join between tables for reports. Not using transactions and rollbacks properly in batch operations. Trying to use a shared network drive as a client-server protocol and then managing serial write isolation by shouting down the hall. Add to that inexpertly and constantly trying to wrangle incompatible date parsing between machines by guess-and-test changes to OS settings. You get the idea. All of these were solvable, but I didn't really understand everything I was doing. Looking back, now that I do, there's really no way to fix all of it without a significant rewrite. These systems work great when you first build them and then fall apart a few years later once they hold thousands or millions of records.
The stability part stems from that. You've got someone building without really understanding what they're doing, and the system's stability reflects it. I'd routinely leave work with the system not properly working. I shipped a broken update a couple of times while working on it. Lots of regressions. The business put up with it because I was making minimum wage and the sunk cost fallacy makes cutting your losses hard. Now imagine a company like SAP or Oracle shows up with a tool that does roughly what your in-house solution does but has real robustness. It doesn't break every other day. It's expensive, but it buys a lot more reliability for managing all the data your company has to deal with. You don't need an expert on the tool; you can just train staff who already deliver more direct business value to work with it instead of your in-house thing. It's not as seamless or streamlined, but it's close enough, and it doesn't depend on magic incantations from the one person in the whole company able to fix things when they go wrong.
The labour shortage is an interesting one. We went through about a twenty-year period where software developers were hard to come by. Businesses wanted more of them than the economy had. That meant learning to code could get you a really nice salary. Right now it looks like we're past that. There are now tens of thousands of unemployed programmers in the United States alone. I left that job for a number of reasons, but I'm not eager to go back to a minimum wage job managing an Access database. What does this mean? Not sure yet. Might even be unrelated. I left, and years later I was brought back to quickly fix a report. I fixed it in about thirty minutes and billed an hour. Nobody's touched the system since I left. Nobody besides me understands how the thing works; they've all just learned to use it to do their jobs. If it had been vibe coded, going back, I wouldn't have understood it either. Food for thought.
Both of the installations mentioned are actually pretty cool.
Still not a fan of how high-end physical art is basically valuable insofar as it facilitates tax evasion and money laundering. Still upset at all the assholes who used and continue to use the technology to destroy the lives of working class people. Still think fanatic techbros are something we as a society really need to address at some point. Still think buying a receipt for a link to a JPEG is absurd. Though I guess "certificates of authenticity" and the similar "deeds to real estate on the moon" have been with us for a while.
On the other hand, an extremely convoluted way to reopen those financial loopholes (as it would have to be) provided a genuine window in which high-end physical artists could make a lot of expensive art installations happen for a little while without as many middlemen taking a cut. That's pretty cool. All the also-rans with ugly random character generators finally went home, and now there's some space to appreciate the real work that's been going on.
I enjoy Python and JavaScript precisely because their most popular runtimes include a REPL and, by extension, a built-in debugger. That alone provides many of the live coding features he's talking about. They're not perfect, and TypeScript is running the other direction in the name of boxing people into VSCode, but those examples can help if you're unfamiliar with languages and runtimes like Clojure, Erlang, and Smalltalk.
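To make that concrete, the built-in debugger is one function call away in modern Python. A minimal sketch:

```python
# Drop into pdb from anywhere in a running program.
def parse(line):
    breakpoint()  # pauses here with the full live state available
    return line.split(",")

parse("a,b,c")  # at the pdb prompt: inspect locals, evaluate code, continue
```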
But it's slow… I can already hear the refrain: these languages are slow. No, a language is an abstract concept. You're confusing the runtime or the build artifacts of a given compiler with the language. To be fair, the talk also blurs the two as shorthand. CPython is currently really slow. No debate from me there. PyPy is much faster. Nuitka is faster still. And Numba is likely faster than what you'd program in Rust or C++ and takes a fraction of the time to code. Unidiomatic language use can also lead to slow code, but that's a skill issue.
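To give a flavour of the Numba point, here's a sketch of my own (assumes the numba and numpy packages are installed):

```python
import numpy as np
from numba import njit

@njit  # JIT-compiles this function to machine code on first call
def total(xs):
    s = 0.0
    for x in xs:  # a plain Python loop, but it runs as native code
        s += x
    return s

print(total(np.arange(1_000_000, dtype=np.float64)))
```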
No, the language runtime being slow isn't generally an issue. There are billion-dollar companies running Python at scale in production. It just drives up hardware costs and limits what you can do on a single machine for a reasonable price in a reasonable amount of time. A reasonably priced computer can still do an insane amount of compute, though. Erlang programs run a large share of all the computers in the global telephone network. Many games grossing a million dollars or more have been built in Python, Lua, and GML. That said, why waste the compute unnecessarily? I'm constantly drawn to Jai, Zig, and Odin precisely because you shouldn't need to pay a runtime penalty in a production environment.
What I'd really like is the ability to split production from development further when it comes to the runtime. Build systems often already have debug and release builds; Zig even has the concepts of ReleaseSafe and ReleaseFast. Now imagine going further: attaching to a running production system with your development runtime, or making both interpretation and compilation truly first-class primitives of the language. Really design the language around iterative, highly introspective systems with injectable runtime binding.
An ideal would be a zero-cost development runtime. That is, you only pay the runtime cost when you attach to the program. You might object that production builds do a bunch of optimizations. Sure, but what's to say you can't hijack the control flow and feed it to an interpreter? Then you can run the source code that went into those optimized routines in a manner that provides full reflection, introspection, and manipulation. Even if a dozen functions have been inlined and fused, you could run an interpreted version of them by patching the landing sites. The environment would look exactly like you'd dropped in an import pdb;pdb.set_trace(), but it would be dynamically patching the optimized code paths over to the interpreter. I'm sure getting the optimization scoping right is far more complicated than that, but there's likely a reasonable tradeoff, since optimizations usually need to bound their search space anyway to prevent combinatorial explosions.
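You can fake the flavour of this today by live-swapping an instrumented version of a function into a running process. A toy Python sketch, where app and handle_request are hypothetical names in the running program:

```python
import app  # hypothetical module inside the running program

original = app.handle_request

def instrumented(request):
    # Full introspection point: log, breakpoint(), or mutate at will.
    print("inspecting:", request)
    return original(request)  # fall through to the original code

app.handle_request = instrumented  # patch the landing site live
```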
Now we go a step further: you can iteratively program some of that code. Like incremental builds, but without having to restart the process. Bring in some of that Bret Victor hot dispatch looping. Add some of those automatic data structure visualizations by giving more standard library data types first-class debugging visualizations. That's kind of what Python's __repr__() methods were all about, but I've more often seen them abused in ways that make debugging harder, hiding things for aesthetics' sake, rather than easier by gathering and presenting the rich inner workings of a given data structure.
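What a debugging-first __repr__() can look like, as a small sketch of my own:

```python
class Ring:
    """Fixed-size ring buffer whose repr exposes its inner workings."""

    def __init__(self, size):
        self.size = size
        self.items = []
        self.head = 0

    def push(self, x):
        if len(self.items) < self.size:
            self.items.append(x)
        else:
            self.items[self.head] = x
            self.head = (self.head + 1) % self.size

    def __repr__(self):
        # Surface the internals instead of hiding them for aesthetics.
        return f"Ring(size={self.size}, head={self.head}, items={self.items!r})"

r = Ring(3)
for n in range(5):
    r.push(n)
print(r)  # Ring(size=3, head=2, items=[3, 4, 2])
```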
Essentially, the ability to connect to your application like you're SSHing into it: run individual functions through the live interpreter, build them up in an editor, and debug them live. The ability to address what Erlang would term processes, but which could be called threads, coroutines, or workers. Monitor their call graphs, interrupt them with breakpoints, kill them, spawn new ones, or blue-green upgrade them in place, all through the attached runtime harness. Almost like independent processes, but with robust IPC.
Zig, Jai, and Go took a small step forward with build systems that can quickly compile and run the program, but they're still not great at scale. Builds in very large projects can still take a while, especially as the number of generics expands. In these languages I haven't yet experienced full Rust or C++ level hour-long build cycles, but they're still not instant. Not being able to attach to the compiled program, introspect it, and modify it is really limiting. I shouldn't be tempted to add print statements and restart the program. I should be able to start the program once when I sit down to work and then edit, debug, and evolve it while it's running. Not hot reloading, but proper multiprocess software development. That probably also means state snapshots and time travel debugging. I'm usually only working on a handful of codepaths at a time; the rest of the program should be running as a fully optimized build, with only those sections on the interpreted slow path.
If any of this sounds cool, check out the talk. It's not a talk about what languages you should be using; it's a talk for language developers, asking them to really look back at the cool things that already exist. It makes the case that if your language doesn't have these sorts of features, you're spending a lot of extra, unnecessary time fiddling with the work around the actual work.
This is exactly what I've been talking about when I say you need to practice more. This! It isn't just about drawing. It's about practicing your craft. It's about how you become great at anything: programming, music, art, speaking Spanish, playing basketball, whatever. You just need to do it. The more you practice, the better you'll become.
The internet has made measuring yourself, getting bogged down in the meta, and endlessly distracting yourself with theories and critique far too easy and fun. Stop sharpening your axe and just start cutting shit. Sure, it'll be rough and ugly. That's how you get better. You cannot just think yourself better. You have to practice.
He's right on the money complaining that technology has held many people back. Too many people who decide they really want to learn a skill get distracted until they give up, because the internet talks about all the technology they supposedly need. It creates an artificial impediment to actually doing the thing. They think they need a stylus hooked up to Photoshop with the right set of brushes, or a MIDI keyboard hooked up to a DAW with the right set of VSTs, instead of things as cheap and easily obtainable as a pencil and some paper, or a used guitar or keyboard. Heck, a stick and some mud, or any drummable surface, is all you need.
For programmers, it's getting all wrapped up in downloading and configuring VSCode, or trying to install Linux or whatever, instead of opening a simple text editor and just hacking out a script to automate something, or making a little webpage or game.
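The bar really is that low. Something like this, typed into any plain text editor, already counts (a sketch; adjust the folder and extension to taste):

```python
# Rename every .JPG in the current folder to lowercase .jpg.
import pathlib

for path in pathlib.Path(".").glob("*.JPG"):
    path.rename(path.with_suffix(".jpg"))
    print("renamed", path.name)
```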
The best part about a pencil or an instrument is that the moment you start interacting with it, you start to create something. Something ugly and raw; your first time doing anything is going to be awful, but you quickly see the possibilities. It's real. You aren't faffing, you're failing, and failing is the first step to improving.
Sure, once you're practicing all the time, you can start to improve on the process. Use resources to learn new things to try. But I often tell people considering a gym membership to start by going for a walk every day. If you can't commit to a walk, be honest, you're not really going to commit to the gym. The same goes for practice. If you just start by watching lots of videos or buying fancy gear or whatever, you're not likely to stick with it. You'll have spent a bunch of time and money and still be right where you started. You'll know how to talk the talk, even look the part, but you still can't walk the walk. You won't be able to create anything of your own worth celebrating until you're regularly practicing.
So please, just start practicing. It's a much better use of your limited time on earth than spending it consuming things other people make. The things you've made will outlive you. The things you've consumed or only thought about die with you. Change only happens when we physically do things, create and build things. Not for others, but for ourselves. Create things you enjoy. Share them if you like. But do it first and foremost for yourself, because making things makes us happy.
I'm some goober typing words in a text editor and you're now reading them. That's the power of creation. And very rarely, someone like you might really enjoy the result, but nobody can enjoy anything you never create. Draw the pictures you want to see. Play the notes you want to hear. Make the videos you want to watch. Build the furniture you want to use. Write the works you want to read. Fabricate the clothes and accessories you want to wear. But start small, make it a habit, and create things, because nobody can stop you.
I have no practical application for this knowledge, but it's one of those really cool, obscure bits of Windows internals. Like knowing about alternate data streams in NTFS, or how you can't normally name files things like con, aux, or prn, because path resolution makes devices like these globally available. Why? Because Windows has its roots in DOS, and before that CP/M, and takes backwards compatibility very seriously.
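You can poke at the device behaviour yourself. A toy example, Windows only:

```python
# The name CON resolves to the console device in every directory,
# which is exactly why you can't create a file named con or con.txt.
with open("CON", "w") as console:
    console.write("hello from a 'file' that is really a device\n")
```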
I think there's room for improvement in simplifying this design and using more mature solutions in some places. That said, the basic idea holds brilliantly! As the public internet gets carved up, consider running your own proper routing again. Don't just rely on a single default route to an ISP to route for you. The ideal solution would maintain many exit nodes and geographically route each connection to the node closest to its destination. This would have the added benefit of reducing how much visibility any service provider gets into your traffic, not just your ISP. Every site would only see the exit node closest to it and couldn't really correlate data with other sites in other locations.
This would essentially have the effect of rolling out something as close to true point-to-point network encryption as you can get. If you had an EC2 instance in each availability zone, you'd be nearly directly peered with the vast majority of websites. You probably don't even need that many: us-east-1 only has 5 AZs, and the recent outage showed that most sites are all just located there. You could also do the same for GCP and Azure and get really close to total coverage. Going further, you could use a few VPS providers to cover some of the other major exchanges and providers like Cogent, Level 3, Equinix, IX.br, and DE-CIX.
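Choosing the exit per destination could start as crude as comparing round-trip times. A rough sketch, where the exit hostnames are hypothetical and assume SSH access with ping installed:

```python
import subprocess

EXITS = ["exit-us-east", "exit-eu-west", "exit-ap-south"]  # hypothetical hosts

def rtt_via(exit_host, dest):
    # Run ping on the exit node and pull the average from the summary
    # line, e.g. "rtt min/avg/max/mdev = 1.2/3.4/5.6/0.7 ms".
    out = subprocess.run(
        ["ssh", exit_host, "ping", "-c", "3", "-q", dest],
        capture_output=True, text=True, timeout=30,
    ).stdout
    return float(out.rsplit("=", 1)[-1].split("/")[1])

def best_exit(dest):
    return min(EXITS, key=lambda exit_host: rtt_via(exit_host, dest))

print(best_exit("example.com"))
```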
In the game of everything old is new again, this reminds me of how some BBS sysops would pay the cost of long-distance dialing to trade messages between boards for things like FidoNet, and other boards could then pull them locally over cheap local calls. Here the expense is the servers you have to rent elsewhere. Following that, the next layer becomes peering: you and your friends could peer your networks to share these exits.
I unfortunately couldn't find it, but I remember reading about a group of operators who run their own internet specifically so they don't have to harden things like game servers against all the attackers you find on the public internet these days. I didn't bookmark or share it because their requirement for joining is that you have to have known one of them personally for at least a decade, specifically so that doing something nefarious carries serious social damage. However, that, paired with this, could make a really great way to rebuild access to more exotic networks as the tide of nationalism settles in.
Their example of adding a menu item hits way too close to home. A lot of my day-to-day work involves existing code. The worst code I deal with is exactly like this example. It's all about how cool it looked when it was being designed on paper: lots of cool design patterns only a galaxy-brained software architect could come up with, lots of time spent making it "flexible" and "future proof". I always say, if you can future-proof, you can predict the future, so please stop writing code and go become a day trader.
Please learn the grug brain way sooner rather than later. The industry will thank you. Focus on making it deletable. That means never reuse functionality that isn't guaranteed to be identical in all possible use cases, forever.
Write code so the implementation is easy to get to with your editor's goto definition feature. This means microservices and externalized libraries are inherently a bad idea until proven otherwise. Extraordinarily little of the reusable code you write will ever be reused by anyone other than yourself. Not even your team, just you.
Make things easy to search for. Structure the code like the structure of the application. Call things what they're called by users, even if this means going back and renaming them once a common vocabulary comes about. You shouldn't have to know where things live. They have names for a reason. Calling them controller, entity, factory, or tree doesn't help me. Don't talk about the code, talk about the problem.
Don't dump everything from one module into another with * imports. Modules are there to fix the biggest problem with C code, where every name lives in a global namespace. Importing things directly into your current module from another means you have to scour the imports to figure out where a thing lives.
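A quick, runnable illustration with a standard library module:

```python
# Hard to trace: where did join() come from? Scour the imports to find out.
from os.path import *
print(join("logs", "app.log"))

# Easy to trace: the owning module is right there at the call site.
import os.path
print(os.path.join("logs", "app.log"))
```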
Nobody is ever going to change that config. Just hard code it. It's fine to just make it a constant at the top of the function or file. Also, that magic numbers advice you got really was about just that, numbers. If you have something like httpGet(url, 200), naming that 200 by defining TIMEOUT_MS = 200 on the line above helps add context. Strings generally don't need this. {"username": User.username} isn't improved by having USERNAME_KEY = "username" in a constants module. In context they're usually self evident. Centralizing things or creating interfaces means I'm always yelling, "Yet again, the implementation is in another bloody castle!"
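To put the magic numbers point in Python terms (the http_get wrapper is my own stand-in, not any particular library):

```python
import urllib.request

TIMEOUT_MS = 200  # naming the number adds context at the call site

def http_get(url, timeout_ms):
    # Thin stand-in wrapper; makes the timeout's units explicit.
    return urllib.request.urlopen(url, timeout=timeout_ms / 1000)

response = http_get("https://example.com", TIMEOUT_MS)

# By contrast, the string below is self-evident in context; a
# USERNAME_KEY = "username" constant elsewhere would add nothing.
payload = {"username": "grug"}
```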
All that summarized: great talk. I think more programmers should spend time maintaining an existing project early in their careers. We might end up with fewer systems with high-minded architectures and more with pragmatic, navigable structures instead. Or not. This is all just my opinion after all. Experienced opinion, but an opinion nonetheless.
The metric, from my experience, seems to be: the more management talks about its methodology, the worse your life will be. Pragmatic people usually don't think they have a methodology; they just stop doing what isn't working and do more of what does. They keep doing this even when they change contexts and what used to work doesn't anymore. The opposite of pragmatism is dogmatism: the belief that failure is a sign you're not doing what you assume will work hard enough, despite all the evidence.
I agree with Zed Shaw, who says the only methodology you need is: make a list of things to do, then do it. He missed a couple of parts for bigger teams. First, the list must have an explicit order, to prevent re-prioritization power games. Second, you need to compare lists between teams regularly to find dependencies and sequence work ahead of time. If you want to know how long the current scope of work will take, measure it. #NoEstimates If that's too long, start cutting scope.
It's interesting when you think about Dave's description of agility. It's once again a reformulation of the age-old DADA loop (Data, Analysis, Decision, Action). I keep seeing it come up again and again under so many different names and formulations. I'm starting to think this is just how intelligence works at the most basic level.
Also love how this briefly detours into control theory. Seriously, go learn control theory. It's unbelievable that nobody in any of my formal education ever taught me control theory before I got my degrees. It's so incredibly applicable to everything we do in software. It's also full of math, and institutions love math because it's easy to justify in a curriculum. If you've never studied it, go learn it. You can thank me later.
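To give a taste of why it maps onto software so well, here's the tiniest possible proportional controller, the same shape you'll find in autoscalers, rate limiters, and backoff logic (a toy sketch):

```python
def p_controller(setpoint, reading, gain=0.5):
    """Return a correction proportional to how far off we are."""
    return gain * (setpoint - reading)

value = 0.0
for tick in range(10):
    value += p_controller(setpoint=10.0, reading=value)
    print(tick, round(value, 3))  # converges smoothly toward 10
```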
SS7 and GSM continue to haunt us. I was already pretty familiar with how you can use ATI messages like this at the carrier level to track anyone you want. Many cellular carriers in the USA already make this feature available pretty cheaply to police department email addresses. Turns out there's a company that's been caught basically running SigPloit on an industrial scale across carriers, because carriers have few if any access controls or firewalls on their interconnects.
I'm genuinely shocked I forgot to share this here and had to come across it a second time before sharing. A lot of great synthesis of ideas, events, topics, histories, and more here. Very well worth the read. It's exciting to see this energy transition happening, but wow are these sorts of global shifts destabilizing.
First and foremost: whether you like dark mode or light mode, you do you, friend. I don't want to persuade you to change your aesthetics. I'm just always trying to learn more about the world, to better understand what's going on and what we think we know about it as a species. Easily one of the best things from this talk is learning the terms positive and negative polarity for describing this. Using those terms to search for information really seems to help.
That out of the way, hope you find the talk fun and informative.
First of all, love this design. Retro-web is such a fun metamodern punk aesthetic. Every one of you makes me smile when I load pages like these. Bonus points when the aesthetic is blended with modern information design so it's still easy to follow and parse.
For the record, I don't support this move. Breaking backward compatibility as a systems programmer is just burning down other people's work. Without an immensely good reason it's just arson.
How did I call it? The condition of the MDN docs around these, coupled with what I know of software pop culture, read as a sign that nobody building browsers thinks XML is worth supporting anymore.
I'm not one for the conspiracy presented here, though. I think it's just out of fashion, and fashion has become the arbiter of all that matters in software development. "Is the brand of the things I build, and the brands of the things I build with, in fashion or not?" It's rich kids keeping up with the Joneses.
It always makes me roll my eyes when the people responsible for the sprawling mess that is, technically, the web "standards" parade around small corners of the swamp exclaiming, "We're cleaning up www subdomains!" Never mind that they've stuffed an entire Bluetooth stack, DRM, multiple database technologies, over two dozen media codecs, shader programming, and more into the web ecosystem. Those three characters gotta go.
Same here. Never mind that browsers still have to parse and support XML by way of XHTML; XSLT processing was the straw that broke the camel's back. Better to force every piece of software that ever used that spec into costly rewrites because I don't like it.
RSS will still work just fine in most readers. I think even most browser-based readers do backend processing of the feed text to turn it into HTML, rather than using the browser's XSLT engine to render it. I could be wrong though. Sorry I don't have any good reader suggestions here; I just built and maintain my own for personal use.
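That backend processing can be as small as this sketch, using the third-party feedparser package and a placeholder feed URL:

```python
import feedparser

# Parse the feed XML server-side and emit your own HTML,
# never touching the browser's XSLT engine.
feed = feedparser.parse("https://example.com/feed.xml")
for entry in feed.entries[:5]:
    print(f'<li><a href="{entry.link}">{entry.title}</a></li>')
```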
If you've been using XSLT on your feed to make it look great in a browser, or worse still, built entire mission-critical applications on XSLT, I feel for you. I won't rehash my thoughts on either side of that choice. However you feel about it, you should be able to count on the web platform remaining a platform, instead of being whatever some Googler thinks being a platform means that week.
In any case, it's all the more reason I'm bullish on a project like Ladybird. With fewer than seven competitors, it's easy to reach consensus on what you'll impose on the rest of society. Not only that, but here you've got two of the three on the take from the third. Monopolization is really good at completely divorcing you from the problems you cause for everyday people.
Just a fun anecdote filled piece all about how to build good user interfaces. Excellent read. Well worth your time, even if you're versed in the subject.
To recap the highlights, along with the well-known UI design guidelines most of them reformulate:

A user interface is well-designed when the program behaves exactly how the user thought it would. That's Jakob's Law: users spend most of their time on other sites.

Every time you provide an option, you're asking the user to make a decision. This one's unique. It's partly about not interrupting the user, partly about not abdicating the design part of design, and partly about not adding controls only a few people want.

Users don't read the manual. That's the Paradox of the Active User (Carroll & Rosson, 1987).

Users can't control the mouse very well. That's Fitts's Law: the time to acquire a target is a function of the distance to, and size of, the target.
Sure, this takes many more words to cover less than the Laws of UX website does. The stories and examples are just fun to read, though. Having vivid stories to remember can significantly improve how many of these abstract concepts stick with you when you're working on UI.
This is the conference talk behind The Documentation System. This is my all time favourite framework for organizing documentation on a project. It builds a little 2x2 matrix based on practical versus theoretical and studying versus working to categorize documentation. Really helps provide clarity on what you're writing and where it should go.
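In practice the quadrants usually land as a docs layout like this (a sketch; the section names come from the system itself):

```
docs/
  tutorials/    practical + studying: lessons that teach by doing
  how-to/       practical + working:  recipes for specific tasks
  explanation/  theoretical + studying: background and reasoning
  reference/    theoretical + working:  dry, complete description
```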
There are times to break with this strict breakdown. For example, I love having a dedicated page at the top of the documentation, outside this structure, explaining what the project is and why you should care. FAQs also tend to sit outside, just to be really easy to find. It's fine to break with the structure when it makes sense, but having it as the backbone of the documentation has really improved how documentation gets split up, and has helped stop the stream-of-consciousness dumping documents.
I would strongly suggest not throwing everything out and starting again. Start by adding this structure to your project and linking (not moving) the existing documents that already fit it. That's already a lot of work, and it doesn't destroy what you have before you've fully built the replacement. Next, you need to build buy-in. If people keep writing documents the old way (usually wherever they think makes sense), you'll never converge. Buy-in from the team means all new documentation gets written from these viewpoints. Even if the folder structure is later abandoned, just having these use cases in mind when writing documentation prevents information soup.
Once all that's done, you can begin refactoring documents as you update them, splitting them into new documents inside the relevant sections. Over time, if you have analytics, you should see more and more people reading documents from the new sections and fewer from the old. Whenever you see a document outside the system being used often, refactor it into the system. If you don't have analytics on your docs, you probably should. They're invaluable for finding which documentation is having a real impact and which isn't.