I had an interview question. What would you do if two different people were emailing a spreadsheet back and forth to track something?
I said I’d move them to Google Sheets. There were about five minutes of awkwardness after that, since I was interviewing for a software developer role. I was supposed to talk about what kind of tool I’d build.
I found it kind of eye opening but I’m still not sure what the right lesson to learn was.
So my cofounder was talking to Stripe about an acquihire (this was after I’d left). As part of it, he had to do a systems design interview.
He got the prompt, asked questions about throughput requirements (etc.), and said, “okay, I’d put it all in Postgres.” He was correct! Postgres could more than handle the load.
He got a call from Patrick Collison saying that he had failed the interview and asking what happened. He explained himself, to which Patrick said: okay, well, yes, you might be right, but you also have to understand what the point of the interview is.
If the point of the interview was to test whether the candidate can design something that handles Google-scale problems, then maybe the interviewer shouldn't state throughput and availability requirements that can be satisfied by Postgres.
Postgres might have been a perfect answer, but the candidate needs to explain why and how.
The purpose of the interview is for the candidate to demonstrate their thought process and ability to communicate it. “Just use Postgres” doesn’t do that.
This would be more obvious if it were a LeetCode problem and the candidate just regurgitated an algorithm from memory without explaining anything about it. Yeah, it’s technically the right answer, but the interviewer can’t tell if you actually understand what you’re talking about or if you just happened to memorize an answer that works.
Interviews are about communication and demonstrating thought process.
100%, interviews are about communication and demonstrating thought process. After going through some rounds of interviewing candidates myself, I've found that any candidate who can adequately explain what they're thinking and how they arrive at their conclusions will demonstrate their skills much more thoroughly than 'just use Postgres'.
That being said, it's also on the ones giving the interviews to push the candidates and ensure that they really are seeing the applicant's best. Interviewers don't want to miss potentially great candidates (interviews are hard and nerve-wracking, and engineers aren't known for their social performance), and thus sometimes need to nudge candidates in the right direction.
I went to law school, and a few of us students were engineers. For our first set of essay exams, the professors all instructed us to "just answer the legal question" and not include extra analysis. After the exam, many of the engineers didn't do well, because the professors *actually* wanted you to weave the whole syllabus into your answer (i.e., discuss hypotheticals that were not actually part of the question asked), not just answer the question. After that, we were fine.
I feel like if that's the thought process, that should be stated up front
There's a ton of incredibly talented neurodivergent people in our ecosystem who would trip up on that question just because of how it's framed
Because how is the interviewee to know if you're testing for the technically sophisticated answer no one in their right mind would ever write or the pragmatic one?
I don't even think you need to be neurodivergent or anything to answer this question the way the parent’s cofounder did.
On one hand, we call ourselves problem solvers; on the other, we're not satisfied with simple solutions to those problems.
If I'm interviewing for a job, I should be expected to behave and solve hypothetical problems the way I'd do it on the job. If that screws up your script, you probably suck at hiring and communicating your expectations.
It's probably more about your mindset than about being neurodivergent vs. neurotypical. If you care more about maintainability and operations, there's a whole host of solutions you'd never build.
Or just add a couple zeros to all the requirements until postgres is a worse solution than whatever the interviewer envisions. Isn't that the point of stating throughput requirements?
If they don’t want to hear the correct answer, let them modify the question to exclude postgres answers. Interviews are a 2 way street, you will miss out on great candidates by being this stupid.
I'd assume that if he got a call from Patrick himself and a second opportunity to get interviewed, that's already a cue for interviewers to pass him regardless of what he says?
No, it was a perfectly fine question IMHO. It's a broken incentive: you're expected to design complex systems regardless of whether they're useful or not. Interview for the role you have to fill, not for a role you're dreaming you'd love to have once you're Google 2.0.
If the interviewer wants you to think about stuff that never happens in the role, I think it's a sign that in that role you're expected to solve problems the way you would in the interview.
> He got the prompt, asked questions about throughput requirements (etc.), and said, “okay, I’d put it all in Postgres.” He was correct! Postgres could more than handle the load.
I had this happen in a Google interview. I did back of the envelope math on data size and request volume and everything (4 million daily events, spread across a similar number of buckets) and very little was required beyond that to meet performance, reliability, and time/space complexity requirements. Most of the interview was the interviewer asking "what about" questions, me explaining how the simple design handled that, and the interviewer agreeing. I passed, but with "leans" vs. "strong" feedback.
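For what it's worth, the back-of-the-envelope math in that interview is simple enough to sketch. The 4 million daily events figure comes from the scenario above; the 10x peak-to-average ratio is my own illustrative assumption:

```python
# Back-of-the-envelope request-rate math for the scenario above.
# 4 million daily events comes from the interview prompt; the
# peak-to-average ratio is an assumption for illustration only.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def avg_qps(events_per_day: float) -> float:
    """Average events per second for a given daily volume."""
    return events_per_day / SECONDS_PER_DAY

daily_events = 4_000_000
average = avg_qps(daily_events)   # roughly 46 events/sec
peak = average * 10               # assumed 10x peak-to-average ratio

print(f"average: {average:.0f}/s, assumed peak: {peak:.0f}/s")
```

Even the assumed peak is a few hundred requests per second, which is well within reach of a simple design on a single database server.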
I always find it funny that "engineers" straight out of school who barely know how to create a PR are expected to "ace" planet scale design questions. It's. Just. So. Dumb.
I would hire the "just use postgres" dude in a heartbeat without re-testing, if the numbers made sense, and perhaps give a stern talking-to to the interviewers. But then again I'm not a unicorn founder, so what do I know.
Eh, it's a good answer and shows good instincts, but they still want to know how he would design a system if one was necessary. There's no need to be ridiculous about any of this from either perspective, which is why it should never have been a "fail" without the original interviewer simply saying "That's a solid answer now tell me what you would do if you had to build something new". I mean look how much time he wasted for everyone including his own CEO by being stubborn about it.
If the numbers can be satisfied by Postgres, then that's the correct answer. The interviewers fucked up, because they sized the problem wrong.
This is the same issue that was prevalent when the industry switched from HDDs to SSDs: some system design questions suddenly became trivial, because IOPS went up by orders of magnitude. That's not a failure of the interviewees, who correctly moved with the times, but a failure of the interviewers.
The point is the interviewers are sometimes obtuse.
Sometimes the point of the interview is to see if the candidate knows an existing solution, and "just use Postgres" is the right answer. Sometimes it's to test technical chops, and pretending Postgres doesn't exist is the point.
The candidate shouldn't be expected to always guess right, unless the position says "a psychic". The interviewer should notice if the candidate is solving the wrong kind of problem and nudge them in the right direction instead of punishing the candidate for the wrong guess.
In an interview you need to explain your thought process and demonstrate that you’re making an informed decision with supporting evidence.
Saying “just use Postgres” without explaining why you believe Postgres is sufficient for the job, what type of hardware and architecture you’d use, and other details is not answering the question.
I realized that my manager really confuses complexity with robustness. Case in point: we have a very complicated script that runs at deployment time to determine the address of the database. It's been the source of a few incidents - database wasn't discovered because it was restarting and the script passed an empty string instead of stopping the deployment, script failed because of python update and empty configuration was passed, shit like that. I've been arguing "bro why can't we make terraform create a config file with all the addresses that is directly passed to the app at deployment, or better yet, just copy-paste the database addresses into a file in the repo because we change something there once a year at maximum" but my manager took it as a sign of incompetence and my inability to understand complex systems.
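The fix being argued for here, failing the deployment loudly instead of passing an empty address through, can be sketched in a few lines. The file path and function name are hypothetical, not the actual script:

```python
import sys

def read_db_address(path: str) -> str:
    """Read the database address from a config file generated at
    deploy time (e.g. by Terraform). Abort the deployment loudly
    rather than silently passing an empty value to the app."""
    try:
        with open(path) as f:
            address = f.read().strip()
    except OSError as exc:
        sys.exit(f"deploy aborted: cannot read {path}: {exc}")
    if not address:
        sys.exit(f"deploy aborted: {path} is empty; refusing to continue")
    return address
```

The point isn't the three-line function; it's that every failure mode stops the deployment instead of propagating an empty string into production.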
I feel like lots of people just follow the happy path and don't understand that complexity incurs real cost.
It's crazy that the original interviewer allowed it to come to this, which sounds like a waste of time for everyone involved, instead of simply saying: "Very good, that is a legitimate solution to the problem. Now let's pretend you have to build something new; what would you do?"
Why on Earth did the company have to be so willfully obtuse and stupid about it, including what sounds like the CEO? (Well, at least he gave him another shot, but there don't need to be implicit assumptions about the "point of the interview"; just come out and address it head-on, explicitly.)
> What would you do if two different people were emailing a spreadsheet back and forth to track something?
I realize this is part of an interview game, but perhaps the best response is still to ask why this is a problem in the first place before you launch into a technical masterpiece. Force the counterparty to provide specific (contrived) reasons that this practice is not working for the business. Then, go about it like you otherwise would have.
I actually really prefer your answer. I would likely counter with, “what potential issues could you see with doing things this way?” But a) you’ve shown me that you don’t charge into solutions without first attempting to define the problems, b) your follow-up answer reveals to me what kind of things you think are important, and c) I’d probably quickly ask something like, “let’s assume that in the past, we’ve had issues with missing changes when emailing this back and forth,” and encourage some more dialogue.
I do dislike interviews where a candidate can fail simply by not giving a specific, pre-canned answer. It suggests a work culture that is very top-down and one that isn’t particularly interested in actually getting to the truth.
Not an engineer, but this reminds me of a similar situation I've seen while interviewing.
Sometimes we'll ask market sizing questions. We will say it's a case question, it's to see their thought process, they're supposed to ask questions, state assumptions, etc.
Occasionally we get a candidate who just doesn't get it. They respond, "oh, I'd Google that." I'll coach them, but they just can't get past it. The point is seeing how you approach problems when you can't just Google the answer, and we use general topics that are easily accessible to do so. But the downside is, yes, you can Google the answer.
The followup questions usually help, like: "What are they tracking?" and "What are the problems caused by using a spreadsheet?" That usually gives you a clue of the answer they're looking for. The answer might be bullshit, but you pass an interview by playing their game, not yours.
It really depends on how much time is spent filling out the spreadsheet.
If they are collectively spending 1hr/mo on the spreadsheet then it’s not worth an SWE’s time to optimize it. If they are spending 4hr/day on the spreadsheet then it’s a prime candidate for automation.
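That break-even logic is trivial to make concrete. The 80-hour build cost and ~21 workdays per month below are illustrative assumptions:

```python
def payback_months(build_hours: float, hours_saved_per_month: float) -> float:
    """Months until the build effort is repaid by the time it saves
    (this values engineer and user hours equally, a simplification)."""
    return build_hours / hours_saved_per_month

# 1 hr/month on the spreadsheet vs. an assumed 80-hour build:
print(payback_months(80, 1))        # 80 months: don't build it
# 4 hr/day over ~21 workdays/month (~84 hr/month), same 80-hour build:
print(payback_months(80, 4 * 21))   # under a month: automate it
```

The hourly rate cancels out of the ratio, which is why the comparison reduces to hours saved vs. hours spent building.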
Way back when I was in IT Admin, I used to have this problem all the time. Some non-tech person emails a spreadsheet, another non-tech person edits it, and saves it. The original person complains that they can't see the changes. Yeah, because it's saved in some MS Windows Profile location that no sane human would ever visit. My solution was to ONLY email links to shared files on a shared resource. The LAST thing I'd ever think of is writing software to solve this problem!
These days if I were interviewing someone and they said, "I'd use the simple solution that is fairly ubiquitous", I'd say, "yes! you've now saved yourself tons of engineering hours - and you've saved the company eng money".
I attach a copy of the file and then provide a network location for where it is located. Makes it easy for people to just open up a simple copy to look at it and they know where to go to access the original.
I feel like you could start waxing poetic about engineering value of meeting people where they are, not reinventing the wheel, etc.
Then after a brief discussion of that you could actually ask if the purpose of the question was for you to design a system to handle that situation and jump into the design.
Yours was a clever answer to a stupid question. Tech interviewers need to leave college behind and start treating candidates as professionals. Puzzles, whiteboarding, and riddles are unique to software engineering roles; you would never see a lawyer, an accountant, a doctor, or engineers in other disciplines going through any of this nonsense. These methods are proven to be a poor predictor of job performance. In my last role as lead engineer, we would chat with the candidate over lunch about random topics. We first wanted to see if they would fit our team. Then in the afternoon we let them work on a little project that was actually part of active development. This way we discovered that most candidates who made it through the screening process could actually be pretty good team members. Our issue was having to decide who to give the offer to, while other companies kept rejecting candidates over bubble sort. Our attrition was also pretty low. So it turns out that software engineers will surprise you when you treat them as grown-ass adults. Who would have guessed?
Honestly, if I'd heard that, I'd hire you in a heartbeat; you solved the problem without increasing the total cost of ownership to the company, and it meant we could move forward.
I'd actually trust you to take on harder problems
Doesn't really matter what the situation is, there's much more that can be achieved in my book with that kind of mindset :)
I'm also of the opinion that in a world of increasingly LLM-written software, being able to have this kind of mindset will actually be really valuable.
The lesson to learn is that in-house development groups are often incentivized to “sell” custom software solutions to their organizations, as their existence (budget) relies on it.
As an interviewee it’s important to try to identify whether the group you’re interviewing with operates this way; literally: how will they get the money to pay for your salary? That way you avoid giving non-starter answers to interview questions.
depends on what metric it is that _you_ want to optimize. i would have given the same answer, then aikidoed their confusion into a quick lecture on the efficiency of software solutions in a business context, and finally a segue into a project i worked on (or made up on a whim) of related relevance that i assume would be more interesting to talk about instead. but given my rather unimpressive career i'd suggest not listening to me.
This is one of my favourite interview questions too. I ask a design question that technically could be solved using the specialist skillset I interview for, but it would be insane to actually do that in the real world. It's a good opener to see how practical and open-minded they really are.
Is it? I have long thought that most things business people use a spreadsheet for belong someplace else. Spreadsheets are an easy way to run quick what-ifs or make lists, but generally the right answer is to update the system so they don't need a spreadsheet. If the data is financials, why can't your accounting system give everyone the view they need from the shared system? Other times what they really need is a database to track this. But a spreadsheet is easy, and so they ignore all the problems it creates, because the right solution needs a real engineer (and often more money than they can spend).
It's a culture fit question. When the culture is 'make everything ourselves' you're not a great culture fit. When the culture is 'just solve the problem', you fit in perfectly well.
I mean you gave the right answer imho. Software engineers are just business people whose main tool is coding. You know you're good if you don't reach for the hammer when you don't have a nail.
I mean that's not really a good answer because you need to ask why they are sending a spreadsheet back and forth. Like yeah using a shared spreadsheet saves them having to email back and forth, but it doesn't help formalize the process, include validations, or make the data available to other systems. And maybe it would turn out that none of that is actually helpful in this case, but if you don't make an effort to ask and understand what's going on then you won't know if there's some bigger problem that needs addressing.
> At least from the point of view of the interviewer, this was the point where they should give you a polite "hey, play along" nudge.
That may be the game, but we all know it's bullshit, and we shouldn't be playing along.
If a member of my team actually proposed building a bespoke system for something that can be straightforwardly done in a spreadsheet, we'd be having some conversations about ongoing maintenance costs of software
> If a member of my team actually proposed building a bespoke system for something that can be straightforwardly done in a spreadsheet, we'd be having some conversations about ongoing maintenance costs of software
All interviews are contrived / artificial situations: The point is to understand the candidate's thought processes. Furthermore, we're getting Bilsbie's (op) take on the situation, there may be context that the interviewer forgot to mention or otherwise Bilsbie didn't understand / remember.
Specifically, if (in the hypothetical situation) this is a critical business process that they need an audit log of, or one that they want to scale, this becomes an exercise in demonstrating that the candidate knows how to collect requirements and turn a manual process into a business application.
The interviewer could also be trying to probe knowledge of event processing, etc., and maybe came up with a bad question. We just don't know.
Given that Bilsbie can't read their interviewer's mind, there's no way to know if that's what the interviewer wanted, or if the interviewer themselves was bad at interviewing candidates.
AI coding tools are making this problem worse in a subtle way. When an agent can generate a "scalable event-driven architecture" in 5 minutes, the build cost of complexity drops to near zero. But the maintenance cost doesn't.
So now you get Engineer B's output even faster, with even more impressive-sounding abstractions, and the promotion packet writes itself in minutes too. Meanwhile the actual cost - debugging, onboarding, incident response at 3am - stays exactly the same or gets worse, because now nobody fully understands what was generated.
The real test for simplicity has always been: can the next person who touches this code understand it without asking you? AI-generated complexity fails that test spectacularly.
To be fair, a lot of the on call people being pulled in at 3am before LLMs existed didn't understand the systems they were supporting very well, either. This will definitely make it worse, though.
I think part of charting a safe career path now involves evaluating how strong any given org's culture of understanding the code and stack is. I definitely do not ever want to be in a position again where no one in the whole place knows how something works while the higher-ups are having a meltdown because something critical broke.
True, but I think the implication (as I read it) is that AI may be providing more complex solutions than were needed for the problem and perhaps more complex than a human engineer would have provided.
This. I've been a sysadmin for a quarter of a century and have professionally written next to no software. I've debugged every system I've had to support at some point though. It's a very different skill set.
This is something I keep thinking while coding with AI, and same with introducing library dependencies for the simplest problems. It’s not whether how quickly I can get there but more about how can I keep it simple to maintain not only for myself but for the next AI agent.
The biggest problem is that the next person is me, six months later :) But even when it's not a next-person problem, it's a question of how much of the design I can keep in my mind at a given time; ironically, AI has the exact same problem, aka the context window.
I think we'll see a decline of software as a product for this reason. If your job is to solve a problem, and you use AI to generate a tool that solves that problem, or you use money to buy a tool that solves that problem, well then it's still your job to solve that problem regardless of which tool you use.
But given how poorly bought software tends to fit the use case of the person it was bought for... eventually generate-something-custom will start making more and more sense.
If you end up generating something that nobody understands, then when you quit and get a new job, somebody else will probably use your project as context for generating something that suits the way they want to solve that problem. Time will have passed, so the needs will have changed, and they'll end up with something different. They'll also only partially understand it, but the gaps will be in different places this time around. Overall I think it'll be an improvement, because there will be less distance (both in time and along the social graph) between the software's user and its creator, who will most of the time be the same person.
I wrote something similar in a Claude Code instructions.md: "minimize cyclomatic complexity" What happened next? It generated an 8 line wrapper function called only once from a different file. So, I told it to inline that logic in the caller. The result? One. Line. Of. Code.
So, I asked it to modify its instructions.md file to not repeat that mistake. The result was the new line "Avoid single-use wrapper functions; inline logic at the call site unless reused"
Not sure if you're kidding or not, but to write great maintainable code, you need a lot of understanding that an LLM just doesn't have: history, business context, company culture, etc. Also, I doubt that its training data has a lot of good examples of great maintainable code to pull from.
This says more about you and the people you work with. I find engineers that have been at the company for a while are quite invaluable when it comes to this information, it's not just knowing the how but the when + why that's critical as well.
Acting like people can't be good at their job is frankly dehumanizing and says a lot about your mindset with how you view other fellow devs.
Admitting you've spent two decades on a career stuck working in the kind of sweatshops that hire people who can't actually code isn't much of a flex, and certainly doesn't lend a whole lot of credence to your argument.
Sometimes, as in bilsbie's top-level comment, the solution is to use a free tool/library/product that already exists. The solution is not always to write new code, but the agent will happily do it.
Maybe that's "the manager's job", but that's just passing the buck and getting a worse solution. Every level of management should be looking for the best solution.
It’s not even about perfectionism. Code’s value is in processing data. Bad code does it wrong, and if you have strange code on top of that, you cannot correct the course. Happy paths are usually the low-hanging fruit. What makes developing software hard is thinking about all the failure and edge cases.
I worked at Amazon 2005-2008 as a Software Dev Engineer. To hammer home company culture, there were two awards which could be awarded at the Quarterly All-hands meeting
* The "Just Do It" Award, which recognized someone just fixing some obvious problem that was in front of them but not their responsibility
* The "Door Desk" Award for frugality, named in honour of the basic door-plus-four-legs desk everyone worked at.
In many ways, the Door Desk award was for simplicity. I remember, one time, someone got an award for getting rid of some dumb operations room with some big unused LCD TVs. When you won these awards, you rarely got any kind of reward. It was just acknowledgement at the meeting. But that time, they literally gave the guy the TVs.
Except that of course, the "simple" Door Desk was actually more expensive than the equivalent from Ikea, had no real additional functionality and took more time to put together. Which somewhat muddies the metaphor ...
In full-time employment this is sad but true. There is a way out of this toxic loop however.
As a consultant/contractor I always evangelise simplification and modelling problems from first principles. I jump between companies every 6-12 months, cleaning up after years of complexity-driven development, or outright designing robust systems that anybody (not just the author) can maintain and extend.
This level of honesty helps you build a reputation. I am never short for work. I also bill more than I could ever as a full-time engineer based in Europe.
Really you can. You look at the engineers who create steaming piles, and you look at the ones who don't. Over a year or two, the difference is easy to spot. For people who care to spot it.
If there's no competent front-line technical management who can successfully make this simple comparison, then, sure, in that case the team may be fucked.
It's easy to gloss over this assessment but ultimately this needs to be a key decision point for where you choose to work. No matter how well you manage complexity as an IC or a lower tier leader, if your upper tier of leaders don't value it, it won't last. Simplicity IME is not a "tail that wags the dog" concept. It's too easy to stomp out if nobody in power cares.
Yes, I should have added "...this way" because I meant that to address GP's claim of the metric-based numerical measurement.
In general, I agree that you can and should judge (not necessarily measure) thing like simplicity and good design. The problem is that business does want the "increased this by 80%, decreased that by 27%" stuff and simplicity does not yield itself to this approach.
I think this is often true and it's the limiting factor that prevents complexity from spiraling out of control. But there's also a certain type of engineer who generates Rube Goldberg code that actually works, not robustly, but well enough. A mad genius high on intelligence and low on wisdom, let's say. This is who can spin complexity into self reward.
Yes, and ironically there are promotion ladders that explicitly call out "staff engineers identify problems before they become an issue". But we all know that in reality no manager-leader is ever going to fix problems eagerly, if they even agree with someone's prediction that that thing is really going to become a problem.
I once used the analogy of the PM coming to the shop with a car that had a barely running engine and broken windows, and he's only letting me fix the windows.
His response: "I can sell a good looking car and then charge them for a better running engine"...
> "Reduced incidents by 80%", "Decreased costs by 40%", "Increased performance by 33% while decreasing server footprint by 25%"
My experience is no one really gets promoted/rewarded for these types of things or at least not beyond an initial one-off pat on the back. All anyone cares about is feature release velocity.
If it's even possible to reduce incidents by 80% then either your org had a very high tolerance for basically daily issues which you've now reduced to weekly, or they were already infrequent enough that 80% less takes you from 4/year to 1/year.. which is imperceptible to management and users.
> All anyone cares about is feature release velocity.
And at the same time it's impossible to convince tech illiterate people that reducing complexity likely increases velocity.
Seemingly we only get budget to add, never to remove. Also for silver bullets, if Big Tech promises a [thing] you can pay for that magically resolves all your issues, management seems enchanted and throws money at it.
You can reduce a single type of incident by 80%. The overall incident rate for this particular type wasn't high enough to kill your company, but it's still a big number on your promotion packet.
The "time to market" folks finally have everything they could hope for, let's see all of that business value they claim is being missed due to pesky things like security, quality, and scalability checks.
One of our interviews is a technical design question that asks the candidate to design a web-based system for public libraries. It explicitly tests for how simple they can keep it, starting at "a single small town library" scale and then changing the requirements to "every library in the country". The best performance ever was someone who answered that by estimating that even at max theoretical scale, all you need is a medium-sized server and Postgres.
I have 100% failed interviews by giving that answer when their definition of scale was 10,000!!!! req/sec. Like sorry dude in 2026 that's not much different than 10 req/sec and my original design would work just fine... But that's what happens when your interviewer is a "senior" 24 year old just reading off the prompt.
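As a sanity check on why 10,000 req/sec is "not much" today: the per-core throughput figure below is a rough illustrative assumption, not a benchmark, but it shows the order of magnitude involved.

```python
# Rough sizing for a 10,000 req/sec requirement. The per-core figure
# is an assumed ballpark for simple indexed reads, not a measurement.

required_rps = 10_000
assumed_rps_per_core = 5_000   # assumption: simple reads, warm cache

cores_needed = required_rps / assumed_rps_per_core
print(f"cores needed: {cores_needed:g}")  # a fraction of one medium server
```

Of course, as the reply below-thread notes, what each request actually does matters far more than the raw count.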
Well, it depends what those requests are doing surely? I always thought it was weird to treat "request" as a unit of measurement. Are you requesting a static help page, or a GraphQL search query?
Most people forget that the early web was built in on-site server closets handling hundreds of requests per second. Businesses were sold on hyperscalers because devs wanted more servers and were tired of arguing about WHY they wanted more servers. Then they got sold on highly available services because every second you're down is a dollar, or more, lost. Nobody mentioned that building and maintaining all that costs more than the money you'd lose, except for the largest of organizations.
Don't even get me started on the resume-driven development that came along with it.
And maybe I'm completely wrong. This is a perspective of one.
Honestly I think that the real result of this is developers that don't really understand the underlying tooling and invent all sorts of bad architectures.
One common example I cite is at one job I owned Kafka and RabbitMQ clusters. Zero consideration was given to message size recommendations and we had incidents on the regular because some application was shoving multi-hundred megabyte messages into RMQ. They'd do other stupid shit like not ack their messages which would cause them to never be removed from local disk. This was a huge org, public company, hiring "only the best and brightest".
Management endlessly just threw more hardware at it rather than make the engineers fix their obviously bad architecture. What a headache. Some companies take the "prioritize engineer happiness" thing right off a cliff.
I forget who said it, but it seems that AI is basically an amplifier of the talents (or lack of them) of whoever is wielding the tool.
In the hands of an experienced developer/designer, AI will help them achieve a good result faster.
In the hands of someone inexperienced, out of their depth, AI will just help them create a mess faster, and without the skill to assess what's been generated they may not even know it.
I am the type of engineer who prefers simplicity and I have not found a way to make AI increase the simplicity of code I'm working on. If left to its own devices, Claude absolutely loves adding more member variables, wrapper functions, type conversions, rather than, say, analyzing and eliminating redundancies. So my experience is that AI is more closely aligned with the engineer type for whom the solution is always "add more code", rather than whatever its human manager would do.
I agree, it just sucks at understanding style and simplicity.
It's good at code generation, feature wise, it can scaffold and cobble together shit, but when it comes down to code structure, architecture, you essentially have to argue with it over what is better, and it just doesn't seem to get it, at which point it's better to just take the reins and do it yourself. If there's any code smells in your code already, it will just repeat them. Sometimes it will just output shit that's overtly confusing for no reason.
I very much agree with that, I had the same thought a few days ago.
I feel/am way more productive using chatgpt codex and it especially helps me getting stuff done I didn't want to get started with before. But the amount of literal slop where people post about their new vim plugin that's entirely vibecoded without any in-depth thinking about the problem domain etc. is a horrible trend.
That's an interesting question ... how should a less experienced developer use AI productively, and learn while developing? Certainly using it as a magic genie and vibe coding something you are in no position to evaluate is not the way to go, nor is that a good way for anyone to use AI if you care about the quality or specifics of the end result!
There's always going to be some overlap, wanting to use a new skill/library in a production system, but maybe in general it's best to think of learning and writing/generating production code as two separate things. AI is great for learning and exploration, but you don't want to be submitting your experiments as PRs!
A good rule of thumb might be: can you explain any AI-generated design and code as well as if you had written it yourself? If you don't fully understand it, then you are not in a good position to own it and take responsibility for it (bugs, performance, edge case behavior, ease of debugging, flexibility for future enhancement, etc.).
Long rant, but the author never defines what he means by "simple". He heavily hints at smaller changeset == simpler.
Too often the smallest changeset is, yes, simple, but totally unaware of the surrounding context, breaks expectations and conventions, causes race conditions, etc.
The good bit in tfa is near the end:
> when someone asks “shouldn’t we future-proof this?”, don’t just cave and go add layers. Try: “Here’s what it would take to add that later if we need it, and here’s what it costs us to add it now. I think we wait.” You’re not pushing back, but showing you’ve done your homework. You considered the complexity and chose not to take it on.
I think Rich Hickey's talk about simple is great for defining these terms (literally). He describes how the roots of "simplex" mean single braid, which compares to the twisting & coupling with complexity; an apt visual for software development. He also differentiates simple/complex from easy/hard, which is important.
The answer to this is almost always "NO" in my experience, because no one ever actually has good suggestions when it comes up. It's never "should we choose a scalable compute/database platform?" It's always "should we build a complex abstraction layer in case we want to use multiple blob storage systems that will only contain the lowest common denominator of features of both AND require constant maintenance AND have weird bugs and performance issues because I think I'm smarter than AWS/Google AND ALSO we have no plans to actually DO that?"
The name of the game is framing. You don't talk about simplicity, because most people don't really understand what simplicity is. They falsely equate it to easy.
Instead you talk about how you complete all your tasks and have so much bandwidth remaining compared to all your peers, the beneficial results of simplicity. Being severely under used while demonstrating the ability to do 2x-10x more work than everybody else is what gets you promoted.
In this vein, simplicity is like hard work. Nobody gives a shit about hard work either. Actually, if all you have to show is how hard you work, you are a liability. Instead it's all about how little you work, provided you accomplish the same, or more, as everybody else.
This has been a thought theme throughout my career and have a good set of scenarios I never ended up publishing.
It's not just the most "elaborate system". The same thing happens in so many other ways. For example, a good/simple solution is one and done, whereas a complex one will be an interminable cause of indirect issues down the road, with the second engineer being the one fixing them.
Then there's another pattern of the 10x (not the case with all 10x-ers) seeding or asked to "fix" other projects, then moving on to the next, leaving all the debt to the team.
It's really an amazing dynamic that can be studied from a game theoretical perspective. It's perhaps one of the adjacent behaviors that support the Gervais principle.
It's also likely going to be over soon, now that AI is normalizing a lot of this work.
I built a showback model at a prior org. Re-used shelfware for the POC, did the research on granular costs for storage, compute, real estate, electricity, HVAC maintenance, hardware amortization, the whole deal. Could tell you down to the penny how much a given estate cost on-prem.
Simple. Elegant. $0 in spend to get running in production, modest spend to expand into public cloud (licensing, mainly). Went absolutely nowhere.
Got RIFed. On the way out the door, I hear a whole-ass team did the same thing, using additional budget, with lower confidence results. The biggest difference of all? My model gave you the actual cost in local currency, theirs gave you an imagined score.
The complexity (cost of a team, unnecessary scoring) was rewarded, not the simplicity.
"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." Antoine de Saint-Exupéry.
It's hard to keep things simple. Management should be mindful of that and encourage engineers to follow YAGNI, to do refactorings that remove more code than they add, etc.
One more opinion piece uselessly recommending "simplicity" with no code samples or concrete takeaways.
> It also shows up in design reviews. An engineer proposes a clean, simple approach and gets hit with “shouldn’t we future-proof this?” So they go back and add layers they don’t need yet, abstractions for problems that might never materialize, flexibility for requirements nobody has asked for. Not because the problem demanded it, but because the room expected it.
$100 says the "clean, simple" approach is the one which directly couples the frontend to the backend to the database. Dependencies follow the control flow exactly, so that if you want to test the frontend, you must have the backend running. If you want to test the backend, you must have the database running.
The "abstractions for problems that might never materialize" are your harnesses for running real business logic under unit-test conditions, that is, instantly and deterministically.
If you do the "simple" thing now, and push away pesky "future-proofing" like architecting for testing, then "I will test this" becomes "I will test this later" becomes "You can't test this" becomes "You shouldn't test this."
Dijkstra understood it 50 years ago, and again 26 years ago [1]. Nothing changes. Malpractice just propagates and there are zero incentives to build simple, small, and maintainable software.
If the company you work for just pushes for unnecessary complexity, get out of there! Don't fold!
> If the company you work for just pushes for unnecessary complexity, get out of there!
Why? We learn all these cool patterns and techniques to address existing complexity. We get to fight TRexes… and so we get paid good money (compared to other jobs). No one is gonna pay me 120K in Europe to build simple stuff that can work in a single sqlite db with a php frontend.
Except now we get websites that need to download 20-25MB of "latest cool framework" to show you a blurb of text because programmers before you created unnecessary complexity that needs to be maintained forever.
The honest opinion no one wants to hear is that programmers do not deserve the money they are paid, because MOST of the time what's really needed is a "single sqlite db with a php frontend".
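To make the claim concrete, here's roughly how little the "single sqlite db" data layer amounts to; a sketch in Python rather than PHP, with the table and column names invented for illustration:

```python
import sqlite3

# The entire data layer of a small CRUD app: one file-backed database,
# no server process, no cluster. (":memory:" here so the sketch is
# self-contained; a real app would pass a file path.)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, qty INTEGER)")
con.execute("INSERT INTO items (name, qty) VALUES (?, ?)", ("widget", 5))
con.commit()

row = con.execute(
    "SELECT name, qty FROM items WHERE name = ?", ("widget",)
).fetchone()
assert row == ("widget", 5)
```

SQLite handles transactions, indexing, and concurrent readers out of the box, which covers a surprising share of real business workloads.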
This is not entirely true. In an environment driven by business stakeholders, the engineer who ships features quickly, and whose features rarely break in production, will be greatly appreciated.
The engineer who takes weeks to over-engineer a simple feature, which then runs into unexpected side issues in production, much less so.
The environment where the over-engineer tends to be promoted is one where the engineering department is (too) far separated from where the end users are. Think of very large organizations with walled departments, or organizations where there simply is not enough to do so engineers start to build stuff to fight non existing issues.
This was my exact thought process reading this. The business side of my company does not care or want to wait for complex solutions that sound cool to engineers. If anything, we have the opposite problem: convincing business stakeholders when complexity is in fact warranted.
There are two ways of constructing software: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
This resonates so hard with me. I was self-employed for over eight years; since I was the one who had to deal with all the messes, I always made sure that things were as simple as sensible (but not simpler). I made a good career out of it. Then I went back to being employed by a company, and I was completely befuddled by the over-complication of designs. Engineers aren't trying to help the business or solve a problem, they're trying to prove how good they are. It's just the completely wrong set of incentives. If you only get promoted by solving complex cases and there are no complex cases to solve, then you'll make them up.
Actually I have seen successful promotion packets based on elimination of complexity. When maintenance of a complex system becomes such a burden that even a director is aware of it, "eliminating toil" is a staff level skill.
More than once I have seen the same project yield two separate promotions, for creating it and deleting it. In particular this happens when the timescale of projects is longer than a single engineer's tenure on a given team.
But yes, avoiding complexity is rarely rewarded. The only way in which it helps you get promoted is that each simple thing takes less time and breaks less often, so you actually get more done.
The problem can be complex, which sometimes means the solution needs to be complex. Often, you can solve a complex problem in simple ways. There’s many ways to do that:
a) finding a new theoretical frame that simplifies the space or solutions, helps people think through it in a principled way
b) finding ways to use existing abstractions, that others may not have been able to find
c) using non-technical levers, like working at the org/business/UX level to cut scope and simplify requirements.
The way you can make complexity work for your career, is to make it clear why the problem is complex, and then what you did to simplify the problem. If you just present a simple solution, no one will get it. It’s like “showing your work”.
In some orgs, this is hopeless, as they really do reward complexity of implementation.
> In design reviews, when someone asks “shouldn’t we future-proof this?”, don’t just cave and go add layers.
In fact, simplicity often is the best future-proofing. Complex designs come with maintenance costs, so simple designs are inherently more robust to reorgs, shifted priorities, and team downsizing.
> Now, promotion time comes around. Engineer B’s work practically writes itself into a promotion packet: “Designed and implemented a scalable event-driven architecture, introduced a reusable abstraction layer adopted by multiple teams, and built a configuration framework enabling future extensibility.” That practically screams Staff+.
> But for Engineer A’s work, there’s almost nothing to say. “Implemented feature X.” Three words. Her work was better. But it’s invisible because of how simple she made it look. You can’t write a compelling narrative about the thing you didn’t build. Nobody gets promoted for the complexity they avoided.
Well, Engineer A's manager should help her write a better version of her output. It's not easy, but it's their job. And if this simpler solution was actually better for the company, it should be highlighted how, in terms that make sense for the business. I might be naive and too optimistic, but good engineers with decent enough managers will stand out in the long run. That doesn't exclude that a few "bad" engineers can game their way up at the same time, even in functional organizations, though.
There's a significant asymmetry though, it's not just a bit more work. I'm a bit cynical here, but often it's easier to just overengineer and be safe than to defend a simple solution; obviously depending on the organization and its culture.
When you have a complex solution and an alternative is stacked up against it, everything usually boils down to a few tradeoffs. The simple solution is generally the one with the most tradeoffs to explain: why no HA, why no queuing, why no horizontal scaling, why no persistence, why no redundancy, why no retry, etc. Obviously not all of them will apply, but just as obviously, the optics of the extensive questioning will hinder any promotion even if you successfully justify everything.
> And if this simpler solution was actually better for the company, it should be highlighted[…]
Simpler than what? The reason this phenomenon is so pervasive in the first place is that people can’t know the alternatives. To a bystander (ie managers), a complex solution is proof of a complex problem. And a simple solution, well anyone could have done that! Right?
If we want to reward simplicity we have to switch reference frame from output (the solution), to input (the problem).
I'm (also) an EM; I've been a pure EM in some roles in my career, and I really struggle to understand these pain points that many people bring up. Isn't it a manager's job to know what their reports are focused on over a period of time? Shouldn't they be aware of the projects the team is working on? As EMs, and most probably former engineers themselves, shouldn't they already know why simple solutions are good?
Maybe I am naive, but I still believe that simplicity leads to personal wins in the long run. Simpler system designs lead to velocity, and eventually you become known as the "team that can deliver".
In larger systems, what looks like “overengineering” can be deliberate risk management. In my experience, senior engineers do get promoted for simplicity but only when they can articulate the trade-offs and the future costs they are avoiding.
I'm not sure I agree with this. If I have work that needs to be done, and a vague idea of how long it should take, the engineer who consistently quotes 3x my expectation (and ends up taking 4x, with bugs) is going to look way worse than the engineer who ships things quickly with no drama.
You need the tension between both, or else either approach, at most levels of systems, whether it's an app or a corporation, tends to lead to toxic failure modes.
It could be something overbuilt, or large organization structures. Brittle solutions that are highly performant until they break. Or products/offerings that don't grow, for similar reasons: simpler-is-better, don't compete with yourself. Or those that grow the wrong way: too many offerings, too much to manage, frailty through complexity, SKU confusion.
Alternatively, things that are allowed to grow with some leeway, some caution, and then pruned back.
There are failure modes in any of these, but the one I see most often is overreaching concern for any single one.
People always overcomplicate this. Companies want to get the most out of their employees, for the least amount of money paid.
Promotions are supposed to incentivise people to stay, rather than leave. If the company never promoted anyone, people would leave. So there needs to be a path for promoting people. But that process doesn’t have to be transparent, or consistent, or fair; in fact it rarely is.
You promote people who consistently overdeliver, on time, at or below cost, who are a pleasure to work with, who would benefit the company long term, who would be a pain to lose. A key precondition is that such people consistently get more done compared to other people with equal pay, otherwise, they don’t stand out and they are not promotion material.
What counts as overdelivering will vary based on specific circumstances. It’s a subjective metric. Are you involved with a highly visible project, or are you working on some BS nobody would miss if it got axed? Are you part of a small team, or are you in a bloated, saturated org? Are you the go-to person when shit hits the fan, or are you a nobody people don’t talk to? Are you consistent, or are you vague and unpredictable? Does your work impact any relevant bottom lines, or are you just part of a cost centre? It really isn’t rocket science, for the most part.
Numerous times I've seen promotions go to people who were visible but didn't do the actual work. Those who share the achievements on Slack, those who talk a lot, who get into meetings with directors, those who try to present the work.
For the vast majority of people and cases, it really is that simple, but like I already said, "the process doesn’t have to be transparent, or consistent, or fair; in fact it rarely is". There are exceptions to every rule, but for most people, it really does come down to some self reflection:
1. Do I consistently deliver more (in output, impact, or reliability) than peers at my pay level?
2. Is my work visible and tied to meaningful business outcomes, rather than low-impact tasks?
3. Am I known as dependable and easy to work with, especially under pressure?
4. Would the company feel a real loss, operationally or financially, if I left?
5. Have I made myself clearly more valuable to the organization than what I currently cost?
I keep reading this online but never encounter it in real life. People I work with and for like simple solutions that don't add complexity. It saves them time and money. I really wonder how is it that some people seem to encounter this toxic mentality so much that they assume it is universal. Is it a FAANG/US culture thing where everyone acts based on corrupted incentives?
The push for simplicity can't be at the time of recognition. It has to be during the building, so that by the time the thing gets built, it's the simplest thing that met the need.
Can you actually imagine a promo committee evaluating the technical choices? "Ok, this looks pretty complex. I feel like you could have just used a DB table, though? Denied."
Absolutely not! That discussion happens in RFCs, in architecture reviews, in hallway meetings, and a dozen other places.
If you want simplicity, you need to encourage mentorship, peer review, and meaningful participation in consensus-building. You need to reward team-first behavior and discourage lone-wolf brilliance. This is primarily a leadership job, but everybody contributes.
But that's harder than complaining that everything seems complicated.
>Can you actually imagine a promo committee evaluating the technical choices? "Ok, this looks pretty complex. I feel like you could have just used a DB table, though? Denied."
A committee with no skin in the game, who knows? But a manager who actually needs stuff done, absolutely.
> The interviewer is like: “What about scalability? What if you have ten million users?”
This reminded me of how much more fun I find it to work on software that always only has one user, and where scaling problems are related solely to the data and GUI interaction design.
Nyeah ... but people can get promoted for consistently shipping stuff that works, on time. And people can get sidelined for consistently taking 10x as much time to provide the same business value. That may not be the rule, and it may not always be obvious in the short term who is sidelined. But it can happen.
It can even happen that the tag "very smart" gets attached to those sidelined engineers. That's not necessarily a compliment.
I’ve definitely consistently seen people who can take a wildly complex, bug-ridden Rube Goldberg machine that was impossible to change and break it down into a simple, understandable system get promoted. These people are generally the best engineers in the org and get a reputation for that.
Any system that bases the reward path on individual contributions has this defect, as opposed to one ensuring that the overall benefit is evenly distributed among all the engaged parties.
The obvious outcome will be that the most skilled pretenders optimize for their selfish, narrow view of profit, no matter what the consequences will be for the collective at large scale and in the long term.
Assuming: simplicity === no unnecessary complexity.
In my (limited) experience as an engineer and manager, leadership (e.g., a VP) didn’t like (or reward) simplicity. Simple solutions often meant smaller teams, which wasn’t something they were pushing for, especially pre-2024. I do think this is slowly evolving, and that the best leaders now focus on removing unnecessary complexity and improving velocity.
Part of this from what I've seen is a large company problem, where developers exist underneath a thick layer of middle management.
In smaller companies it's a lot easier to express the distinctions and values of simplicity to ears that might actually appreciate it (so long as you can translate it into what they value - simple example is pointing out that you produced the same result in much less time, and that you can get more done on overall feature/task level as a result).
Not just simplicity: we are wired towards additive solutions, not subtractive ones. Faced with a problem, we try to add more elements instead of taking out existing ones. And it's those additions that count, that are seen, not the invisible, missing ones.
Engineer B, who can get that overcomplex solution working, is the person you will turn to when complexity is required for the problem. They have experience in getting it to work, and as such they really are worth more.
The real question is how do you tell engineer A who can figure out how to make the complex problems simple from engineer C who can't handle complexity and so writes simple solutions when the complex one is needed.
Not really, because even when complexity is required, the last thing you want is even more, unneeded complexity. There is no guarantee that the kind of complexity B brought to a problem is the exact same kind you're going to need somewhere else. It turns out that complexity is, shall we say, more complex than that.
Essentially, there are two parallel teams, one is seen constantly huddling together, working late, fixing their (broken) service. The other team is quiet, leaves on time, their service never has serious issues. Which do you think looks better from the outside?
Whatever is going on with each 'f' in this font is breaking my brain. Feels like the drunk goggles equivalent of dyslexia.
I don't think this phenomenon is unique to programming. My plumber was explaining how he put in a manifold and a centralized whole-house shut-off valve accessible indoors, and I was like, okay, thanks? I can just turn it off at the street.
Only established professionals have the status and self-confidence to show restraint. I think that explains interviews.
Your plumber is probably making a good choice. Valves that don’t get exercised regularly can be all but guaranteed to be a pain in the ass when you need them to work.
We have a calendar reminder to exercise the valves in our house yearly, and the fact that they’re easy to get at helps make sure it’s a quick job, not a tedious one.
Not a plumber, but have lived in enough old houses with iffy valves to have been bitten a few times.
I feel this. I once worked for this manager, and whenever we finished a sprint, the first question he ALWAYS asked was "What tool(s) did you use/implement?" Many times, the answer was "No tools, I just banged out a bit of code to do the job.", only to get looked at for several seconds before he looked disappointed, and moved on. It was infuriating!
Yep people get promoted for bullshit throwaway projects that are built in the fastest & dirtiest way possible so that management can dance & clap about how brilliant everyone is about every 2-4 weeks.
Is it just me or could this apply to commentary as well? Sometimes, I set out to comment with all my thoughts and their intricacies related to the subject, but sometimes the simplest one contributes far more to the conversation. In my experience, simplicity enables others to more freely participate and contribute.
Charlie Munger once said: "Show me the incentive, I will show you the outcome."
Problem is, in big tech the incentives are all aligned towards complex stuff.
you wanna get promoted - do complex stuff that doesn't make sense. If you get promoted, then it also boosts the performance of your manager.
the only way to escape this nonsense is to work at smaller companies
Right now you've got people advocating for AI-coded solutions, never realizing that AI solutions result in a convoluted mess, since AI never possesses the ART of software engineering, i.e. knowing what to cut out.
I interviewed at a company that used a simple project to screen candidates. It was implementing a cash register checkout system. The task was so simple that I couldn't figure out what they were looking for. So I implemented the simplest thing possible. I got the job partially because they were impressed by my utterly simple solution. I helped evaluate other candidates given the exact same problem and it's amazing how some people dialed up the complexity to 11. None of them passed the screening.
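For a sense of scale, "the simplest thing possible" for a checkout exercise like that can be a handful of lines. This sketch is my own guess at the shape of the task, with item names and prices invented:

```python
# A price list and a cart as plain dicts; everything here is made up.
PRICES = {"apple": 0.50, "bread": 2.00, "milk": 1.25}

def checkout(cart):
    """cart maps item name -> quantity; returns the total due."""
    return sum(PRICES[item] * qty for item, qty in cart.items())

total = checkout({"apple": 3, "milk": 2})
assert total == 4.00
```

Anything beyond this (plugin pricing strategies, event sourcing, a rules engine) is complexity the task never asked for.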
It's not just that it looks good, there is constant pressure from other Engineers that we should "Do it right" and "Plan for the future" even if the future is murky and every design choice we take for scalability is probably just constraints that will hinder us if the requirements change.
As a manager it's a constant fight against the pressure to build "great software" that is way above what the company needs, instead of building working software that addresses customer needs in a timely manner.
My dude, we are a startup with two servers and 20 customers; we do not need infinite scalability.
Being able to solve problems with true simplicity is a master’s skill. The skill to recognize simplicity and its value is a skill as well.
You can try to explain this OP’s concept to a stakeholder in a 1000 different sensible ways and you’ll get blinking deer-in-headlight eyes back at you.
This skill is hard-earned and, so, rare.
Therefore, many hierarchies are built on sufficient mediocrity top to bottom.
Which works because bottom line doesn’t often matter in software dev anyway.
And even when it does matter it’s multiplicatively rare to have a hierarchy or even the market that it tries to serve who can build, comprehend, handle high power::complexity systems, products, tools.
I'm trying to sell simplicity to my target market, who I would call "semi-tech literate". Maybe it's stupid and I should sell whatever Forbes thinks is cool, but I just can't shake this feeling that I should be solving actual business problems.
We failed a bid for a project because of simplicity. We were to migrate a service running on an on-prem Kubernetes installation and a three, or five, node Apache Cassandra cluster to Azure.
The service saw maybe a few hundred transactions per day; total database size: 2 - 3GB. The system would hold data about each transaction until processed, and then age it out over three months, making the database size fairly stable.
Talking to a developer advocate for Azure, we learned that CosmosDB would get a Cassandra API, and we got access to the preview. The client was presented with a solution where the service would run as a single container in Azure Websites, using CosmosDB as the database backend. The whole thing could run within the free tier at that point. Massive saving, much easier to manage. We got rejected because the solution didn't feel serious and seemed too simplistic for an organisation of their scale.
On the other hand, I also once replaced a BizTalk server with 50 lines of C#, and that was well received by the client; less so by my boss, who now couldn't keep sending the bill for a "BizTalk support contract" (which we honestly couldn't honour anyway).
Promotion Driven Development at its finest. There's no good way to fix this without better teams and less Lord of the Flies style management. Servant leadership helps here, but if your team is adversarial in nature there is no escape. A manager that needs an exciting story to get a feather in their hat will take the story every time over "+20 -8000" commit style developers. Your product will suffer accordingly.
A lot of this boils down to promo system being so systematized. I've never heard of people in any other field min/max their promotions as hard along with all of the expert jargon in any other field I've worked in. Packets, peers, comp, other co comps, what your boss thinks of you, what your boss thinks of your peers (nee: competitors), and the inevitable crash out when they don't get the promotion. All part of the bigco experience! I feel like when we systematized comp into ranks Lx, Ly we gave up our leverage a little bit.
If I were an engineering manager in an org which actually valued getting sh*t done - vs. bragging rights, head counts, and PHB politics - then I'd notice within a month that Engineer A (who the article has shipping in a couple of days) got far more done than Engineer B (who needed 3 weeks).
And long before performance review time, I'd have mentioned further up that A was looking like a 5X engineer - best if we keep her happy.
I once hacked a spreadsheet in a week that was good enough to not embark on a multiple-months 3-devs project.
In the same team, I tweaked a configuration file for distributed calculations that shaved 2 minutes of calculation on an action that the user would run thirty times a day.
I got paid all right.
People don't give a shit about complexity or simplicity. They care about two things:
1. Does it work
2. How soon can you ship
There is a third thing that stakeholders really like: when you tell them what they should be building, or not building.
I said I’d move them to google sheets. There was about five minutes of awkwardness after that as I was interviewing for software developer. I was supposed to talk about what kind of tool I’d build.
I found it kind of eye opening but I’m still not sure what the right lesson to learn was.
He got the prompt, asked questions about throughput requirements (etc.), and said, “okay, I’d put it all in Postgres.” He was correct! Postgres could more than handle the load.
He gets a call from Patrick Collison saying that he failed the interview and asking what happened. He explained himself, to which Patrick said, okay, well, yes, you might be right, but you also have to understand what the point of the interview is.
They made him do it again and he passed.
The purpose of the interview is for the candidate to demonstrate their thought process and ability to communicate it. “Just use Postgres” doesn’t do that.
This would be more obvious if it were a LeetCode problem and the candidate just regurgitated an algorithm from memory without explaining anything about it. Yeah, it’s technically the right answer, but the interviewer can’t tell if you actually understand what you’re talking about or if you just happened to memorize an answer that works.
Interviews are about communication and demonstrating thought process.
That being said, it's also on the ones giving the interviews to push the candidates and ensure that they really are seeing the applicant's best. The interviewers don't want to miss potentially great candidates (interviews are hard and nerve-wracking, and engineers aren't known for their social performance), and thus sometimes need to nudge the candidates in the right direction.
There's a ton of incredibly talented neurodivergent people in our ecosystem who would trip up on that question just because of how it's framed
Because how is the interviewee to know if you're testing for the technically sophisticated answer no one in their right mind would ever write or the pragmatic one?
On one hand, we call ourselves problem solvers; on the other, we're not satisfied with simple solutions to these problems. If I'm interviewing for a job, I should be expected to behave and solve hypothetical problems the way I'd do it on the job. If that screws up your script, you probably suck at hiring and communicating your expectations.
I'd assume that if he got a call from Patrick himself and a second opportunity to get interviewed, that's already a cue for interviewers to pass him regardless of what he says?
Exactly, it's also a test of ability to conform. Especially useful to weed out rogue behavior picked up in startups.
If a valid answer was “just use Postgres” then it just wasn’t a very good interview question.
In real life, the answer almost certainly would be “just use Postgres” and everyone would be happy.
If the interview wants you to think about stuff that never happens in your role, I think it is a sign that in your role, you're expected to solve the problems like in the interview.
I had this happen in a Google interview. I did back of the envelope math on data size and request volume and everything (4 million daily events, spread across a similar number of buckets) and very little was required beyond that to meet performance, reliability, and time/space complexity requirements. Most of the interview was the interviewer asking "what about" questions, me explaining how the simple design handled that, and the interviewer agreeing. I passed, but with "leans" vs. "strong" feedback.
And yes, I have done this on a second Google interview about 15 years ago.
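The back-of-envelope math the comment describes is quick to reproduce. A sketch: the 4 million daily events figure is from the comment, but the 10x peak factor is an assumption added here for illustration.

```python
# Back-of-envelope load math for the interview scenario above.
# 4 million daily events is from the comment; the peak factor is assumed.

daily_events = 4_000_000
seconds_per_day = 24 * 60 * 60            # 86,400

avg_rps = daily_events / seconds_per_day  # ~46 events/s on average
peak_rps = avg_rps * 10                   # ~463 events/s at an assumed 10x peak
```

At tens of requests per second, even a modest single-node design clears the performance bar, which is why every "what about" question could be answered by the simple design.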
They want a conversation to see how you think, not an actual answer.
Which is stupid, because they asked a question that the person didn't need to think to answer. So they didn't get to see them think.
I would hire the "just use postgres" dude in a heartbeat without re-testing, if the numbers made sense, and perhaps give a stern talking-to to the interviewers. But then again I'm not a unicorn founder, so what do I know.
This is the same issue that was prevalent when the industry switched from HDD to SSD: some system design questions suddenly became trivial, because the IOPS went up by magnitudes. This is not a failure of the interviewees, who correctly went with the times, but a failure of the interviewers.
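To make "went up by magnitudes" concrete, here is a rough sketch; the IOPS numbers are common ballpark figures and the workload is an assumption, not anyone's real system.

```python
# Why SSDs trivialized some design questions: the random-I/O budget.
# Ballpark only: a 7200rpm HDD does on the order of 100-200 random IOPS;
# a commodity SSD does 100k+. Workload numbers below are assumptions.

hdd_iops = 150
ssd_iops = 100_000

reads_per_request = 5            # assumed index + row lookups per request
target_rps = 2_000               # assumed request rate

needed_iops = reads_per_request * target_rps   # 10,000 random reads/s
hdds_needed = needed_iops / hdd_iops           # ~67 spindles: a real design problem
ssd_fraction = needed_iops / ssd_iops          # 0.1 of one SSD: a non-problem
```

The same question that once forced a discussion of sharding and caching now fits comfortably on one disk.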
So what's the point? I honestly don't understand.
Sometimes the point of the interview is to see if the candidate knows an existing solution and "just use postgres" is the good answer. Sometimes it's to test the technical chops and pretending postgres doesn't exist is the point.
The candidate shouldn't be expected to always guess right, unless the position says "a psychic". The interviewer should notice if the candidate is solving the wrong kind of problem and nudge them in the right direction instead of punishing the candidate for the wrong guess.
The question is framed as a way for you to show you know x, y, and z, and to talk about x, y, and z.
Even if a valid solution is just to do A, that's great. But the interviewer has no idea if you actually know about x, y, and z, do they?
I like that this sentence can be read both as a productive, well-meaning view on interviews, as well as a highly cynical one.
Also makes me wonder if the person will keep showing how much they know and how smart they are after they are hired, and if that is a good thing.
Saying “just use Postgres” while providing no explanation of why you believe Postgres is sufficient for the job, no general outline of what type of hardware and architecture you’d use, and no other details is not answering the question.
I feel like lots of people just follow the happy path and don't understand that complexity incurs real cost.
Why on Earth did the company have to be so willingly obtuse and stupid about it, apparently including the CEO? (Well, at least he gave him another shot.) There's no need for implicit assumptions about the "point of the interview"; just come out and address it head-on, explicitly.
I realize this is part of an interview game, but perhaps the best response is still to ask why this is a problem in the first place before you launch into a technical masterpiece. Force the counterparty to provide specific (contrived) reasons that this practice is not working for the business. Then, go about it like you otherwise would have.
I do dislike interviews where a candidate can fail simply by not giving a specific, pre-canned answer. It suggests a work culture that is very top-down and one that isn’t particularly interested in actually getting to the truth.
Sometimes we'll ask market sizing questions. We will say it's a case question, it's to see their thought process, they're supposed to ask questions, state assumptions, etc.
Occasionally we'd get a candidate that just doesn't get it. They respond "oh I'd Google that". And I'll coach them back but they just can't get past that. It's about seeing how you approach problems when you can't just Google the answer, and we use general topics that are easily accessible to do so. But the downside is yes, you can google the answer.
Spreadsheets are a tricky one; some people like the power and autonomy they have with spreadsheets.
But more often spreadsheets are the only way to transfer data between silos, and they waste a lot of time and are error-prone.
If they are collectively spending 1hr/mo on the spreadsheet then it’s not worth an SWE’s time to optimize it. If they are spending 4hr/day on the spreadsheet then it’s a prime candidate for automation.
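The threshold the comment describes can be made concrete with some break-even arithmetic; every dollar figure and estimate below is an illustrative assumption, not a number from the comment.

```python
# Build-vs-don't-build break-even for automating a spreadsheet workflow.
# All rates and estimates here are illustrative assumptions.

HOURLY_COST = 75        # assumed fully loaded $/hr for the people involved
BUILD_HOURS = 160       # assume ~1 month of SWE time to automate it

def payback_months(spreadsheet_hours_per_month: float) -> float:
    """Months until the saved user time pays for the build."""
    monthly_savings = spreadsheet_hours_per_month * HOURLY_COST
    return (BUILD_HOURS * HOURLY_COST) / monthly_savings

# 1 hr/mo        -> 160-month payback: not worth an SWE's time
# 4 hr/day (~84 hr/mo) -> under 2 months: prime candidate for automation
```

The exact constants don't matter much; the two scenarios land orders of magnitude apart, which is the comment's point.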
These days if I were interviewing someone and they said, "I'd use the simple solution that is fairly ubiquitous", I'd say, "yes! you've now saved yourself tons of engineering hours - and you've saved the company eng money".
Then after a brief discussion of that you could actually ask if the purpose of the question was for you to design a system to handle that situation and jump into the design.
I'd actually trust you to take on harder problems
Doesn't really matter what the situation is, there's much more that can be achieved in my book with that kind of mindset :)
I'm also of the opinion that in a world where software is increasingly written by LLMs, being able to have this kind of mindset will actually be really valuable.
Exactly, because that means lower costs for software development when delivering solutions.
As an interviewee it’s important to try and identify whether the group you’re interviewing with operates this way, literally: How will they get the money to pay for your salary? That way you avoid giving non-starter answers to interview questions.
So true... I've failed interviews because the interviewer saw using library functions as a sign of weakness.
At least from the point of view of the interviewer, this was the point where they should give you a polite "hey, play along" nudge.
That may be the game, but we all know it's bullshit, and we shouldn't be playing along.
If a member of my team actually proposed building a bespoke system for something that can be straightforwardly done in a spreadsheet, we'd be having some conversations about ongoing maintenance costs of software
All interviews are contrived / artificial situations: The point is to understand the candidate's thought processes. Furthermore, we're getting Bilsbie's (op) take on the situation, there may be context that the interviewer forgot to mention or otherwise Bilsbie didn't understand / remember.
Specifically, if (the hypothetical situation) is a critical business process that they need an audit log of; or that they want to scale, this becomes an exercise in demonstrating that the candidate knows how to collect requirements and turn a manual process into a business application.
The interviewer could also be trying to probe knowledge of event processing, etc., and maybe came up with a bad question. We just don't know.
Given that Bilsbie can't read their interviewer's mind, there's no way to know if that's what the interviewer wanted, or if the interviewer themselves was bad at interviewing candidates.
So now you get Engineer B's output even faster, with even more impressive-sounding abstractions, and the promotion packet writes itself in minutes too. Meanwhile the actual cost - debugging, onboarding, incident response at 3am - stays exactly the same or gets worse, because now nobody fully understands what was generated.
The real test for simplicity has always been: can the next person who touches this code understand it without asking you? AI-generated complexity fails that test spectacularly.
To be fair, a lot of the on call people being pulled in at 3am before LLMs existed didn't understand the systems they were supporting very well, either. This will definitely make it worse, though.
I think part of charting a safe career path now involves evaluating how strong any given org's culture of understanding the code and stack is. I definitely do not ever want to be in a position again where no one in the whole place knows how something works while the higher-ups are having a meltdown because something critical broke.
The biggest problem is that the next person is me, 6 months later :) But even when it's not a next-person problem, there's the question of how much of the design I can keep in my mind at a given time. Ironically, AI has the exact same problem, aka the context window.
But given how poorly bought software tends to fit the use case of the person it was bought for... eventually generate-something-custom will start making more and more sense.
If you end up generating something that nobody understands, then when you quit and get a new job, somebody else will probably use your project as context for generating something that suits the way they want to solve that problem. Time will have passed, so the needs will have changed, and they'll end up with something different. They'll also only partially understand it, but the gaps will be in different places this time around. Overall I think it'll be an improvement, because there will be less distance (both in time and along the social graph) between the software's user and its creator, them being most of the time the same person.
I like to have something like the following in AGENTS.md:
## Guiding Principles
- Optimise for long-term maintainability
- KISS
- YAGNI
So, I asked it to modify its instructions.md file to not repeat that mistake. The result was the new line "Avoid single-use wrapper functions; inline logic at the call site unless reused"
instructions.md is the new intern.
Acting like people can't be good at their job is frankly dehumanizing and says a lot about your mindset with how you view other fellow devs.
Maybe that's "the manager's job", but that's just passing the buck and getting a worse solution. Every level of management should be looking for the best solution.
Avoid hands-on tech/team lead positions like hell.
In many ways, the Door Desk award was for simplicity. I remember, one time, someone got an award for getting rid of some dumb operations room with some big unused LCD TVs. When you won these awards, you rarely got any kind of reward. It was just acknowledgement at the meeting. But that time, they literally gave the guy the TVs.
(amzn 94-96)
As a consultant/contractor I always evangelise simplification and modelling problems from first principles. I jump between companies every 6-12 months, cleaning up after years of complexity-driven development, or outright designing robust systems that anybody (not just the author) can maintain and extend.
This level of honesty helps you build a reputation. I am never short for work. I also bill more than I could ever as a full-time engineer based in Europe.
"Reduced incidents by 80%", "Decreased costs by 40%", "Increased performance by 33% while decreasing server footprint by 25%"
Simplicity for its own sake is not valued. The results of simplicity are highly valued.
If there's no competent front-line technical management who can successfully make this simple comparison, then, sure, in that case the team may be fucked.
In general, I agree that you can and should judge (not necessarily measure) thing like simplicity and good design. The problem is that business does want the "increased this by 80%, decreased that by 27%" stuff and simplicity does not yield itself to this approach.
Building a system that's fast on day one will not usually be rewarded as well as building a slow system and making it 80% faster.
His response: "I can sell a good looking car and then charge them for a better running engine"...
https://www.youtube.com/watch?v=T4Upf_B9RLQ hits a little too close to home.
My experience is no one really gets promoted/rewarded for these types of things or at least not beyond an initial one-off pat on the back. All anyone cares about is feature release velocity.
If it's even possible to reduce incidents by 80%, then either your org had a very high tolerance for basically daily issues, which you've now reduced to weekly, or they were already infrequent enough that 80% less takes you from 4/year to 1/year, which is imperceptible to management and users.
And at the same time it's impossible to convince tech illiterate people that reducing complexity likely increases velocity.
Seemingly we only get budget to add, never to remove. Also for silver bullets, if Big Tech promises a [thing] you can pay for that magically resolves all your issues, management seems enchanted and throws money at it.
That's regardless of the lip service they pay to cost cutting or risk reduction. It will only get worse, in the AI economy it's all about growth.
Being obviously pedantic here, I agree with what you meant.
Don't even get me started on the resume-driven development that came along with it.
And maybe I'm completely wrong. This is a perspective of one.
One common example I cite is at one job I owned Kafka and RabbitMQ clusters. Zero consideration was given to message size recommendations, and we had incidents on the regular because some application was shoving multi-hundred-megabyte messages into RMQ. They'd do other stupid shit like not ack their messages, which would cause them to never be removed from local disk. This was a huge org, a public company, hiring "only the best and brightest".
Management endlessly just threw more hardware at it rather than make the engineers fix their obviously bad architecture. What a headache. Some companies take the "prioritize engineer happiness" thing right off a cliff.
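One standard fix for the oversized-message problem described above is the claim-check pattern: keep the broker message small and put the payload in object storage. A minimal sketch of the idea, with a hypothetical size limit and an in-memory dict standing in for a real blob store like S3; function names are made up for illustration.

```python
import json
import uuid

# Claim-check pattern: keep oversized payloads OUT of the message broker.
# The size limit is hypothetical; swap the dict for real object storage.

MAX_INLINE_BYTES = 128 * 1024        # brokers like RabbitMQ favor small messages

blob_store: dict[str, bytes] = {}    # stand-in for S3/GCS/etc.

def make_message(payload: bytes) -> str:
    """Inline small (UTF-8) payloads; stash big ones and send only a reference."""
    if len(payload) <= MAX_INLINE_BYTES:
        return json.dumps({"inline": payload.decode("utf-8")})
    key = str(uuid.uuid4())
    blob_store[key] = payload        # would be an object-store upload in real life
    return json.dumps({"blob_ref": key})

def read_message(msg: str) -> bytes:
    """Consumer side: fetch the payload from wherever it actually lives."""
    body = json.loads(msg)
    if "inline" in body:
        return body["inline"].encode("utf-8")
    return blob_store[body["blob_ref"]]

small = make_message(b"order accepted")
big = make_message(b"x" * (2 * 1024 * 1024))    # 2 MB: routed to the blob store
assert read_message(small) == b"order accepted"
assert "blob_ref" in json.loads(big)
```

The queue then only ever carries kilobyte-scale references, so broker disk and memory stay bounded no matter what applications produce.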
In the hands of an experienced developer/designer, AI will help them achieve a good result faster.
In the hands of someone inexperienced, out of their depth, AI will just help them create a mess faster, and without the skill to assess what's been generated they may not even know it.
It's good at code generation, feature wise, it can scaffold and cobble together shit, but when it comes down to code structure, architecture, you essentially have to argue with it over what is better, and it just doesn't seem to get it, at which point it's better to just take the reins and do it yourself. If there's any code smells in your code already, it will just repeat them. Sometimes it will just output shit that's overtly confusing for no reason.
I feel/am way more productive using chatgpt codex and it especially helps me getting stuff done I didn't want to get started with before. But the amount of literal slop where people post about their new vim plugin that's entirely vibecoded without any in-depth thinking about the problem domain etc. is a horrible trend.
There's always going to be some overlap, wanting to use a new skill/library in a production system, but maybe in general it's best to think of learning and writing/generating production code as two separate things. AI is great for learning and exploration, but you don't want to be submitting your experiments as PRs!
A good rule of thumb might be can you explain any AI-generated design and code as well as if you had written it yourself? If you don't fully understand it, then you are not in a good position to own it and take responsibility for it (bugs, performance, edge case behavior, ease of debugging, flexibility for future enhancement, etc).
Too often the smallest changeset is, yes, simple, but totally unaware of the surrounding context, breaks expectations and conventions, causes race conditions, etc.
The good bit in tfa is near the end:
> when someone asks “shouldn’t we future-proof this?”, don’t just cave and go add layers. Try: “Here’s what it would take to add that later if we need it, and here’s what it costs us to add it now. I think we wait.” You’re not pushing back, but showing you’ve done your homework. You considered the complexity and chose not to take it on.
https://www.youtube.com/watch?v=SxdOUGdseq4
The answer to this is almost always "NO" in my experience, because no one ever actually has good suggestions when it comes up. It's never "should we choose a scalable compute/database platform?" It's always "should we build a complex abstraction layer in case we want to use multiple blob storage systems that will only contain the lowest common denominator of features of both AND require constant maintenance AND have weird bugs and performance issues because I think I'm smarter than AWS/Google AND ALSO we have no plans to actually DO that?"
/I'm not bitter...
Instead you talk about how you complete all your tasks and have so much bandwidth remaining compared to all your peers: the beneficial results of simplicity. Being severely underused while demonstrating the ability to do 2x-10x more work than everybody else is what gets you promoted.
In this vein, simplicity is like hard work. Nobody gives a shit about hard work either. Actually, if all you have to show is how hard you work, you are a liability. Instead it's all about how little you work, provided that you accomplish the same as, or more than, everybody else.
It's not just the most "elaborate system". The same thing happens in so many other ways. For example, a good/simple solution is one and done, whereas a complex one will be an interminable cause of indirect issues down the road, with the second engineer being the one fixing them.
Then there's another pattern: the 10x engineer (not the case with all 10x-ers) being seeded into or asked to "fix" other projects, then moving on to the next, leaving all the debt to the team.
It's really an amazing dynamic that can be studied from a game theoretical perspective. It's perhaps one of the adjacent behaviors that support the Gervais principle.
It's also likely going to be over soon, now that AI is normalizing a lot of this work.
I built a showback model at a prior org. Re-used shelfware for the POC, did the research on granular costs for storage, compute, real estate, electricity, HVAC maintenance, hardware amortization, the whole deal. Could tell you down to the penny how much a given estate cost on-prem.
Simple. Elegant. $0 in spend to get running in production, modest spend to expand into public cloud (licensing, mainly). Went absolutely nowhere.
Got RIFed. On the way out the door, I hear a whole-ass team did the same thing, using additional budget, with lower confidence results. The biggest difference of all? My model gave you the actual cost in local currency, theirs gave you an imagined score.
The complexity (cost of a team, unnecessary scoring) was rewarded, not the simplicity.
It's hard to keep things simple. Management should be mindful of that and encourage engineers to follow YAGNI, to do refactorings that remove more code than they add, etc.
> It also shows up in design reviews. An engineer proposes a clean, simple approach and gets hit with “shouldn’t we future-proof this?” So they go back and add layers they don’t need yet, abstractions for problems that might never materialize, flexibility for requirements nobody has asked for. Not because the problem demanded it, but because the room expected it.
$100 says the "clean, simple" approach is the one which directly couples the frontend to the backend to the database. Dependencies follow the control flow exactly, so that if you want to test the frontend, you must have the backend running. If you want to test the backend, you must have the database running.
The "abstractions for problems that might never materialize" are your harnesses for running real business logic under unit-test conditions, that is, instantly and deterministically.
If you do the "simple" thing now, and push away pesky "future-proofing" like architecting for testing, then "I will test this" becomes "I will test this later" becomes "You can't test this" becomes "You shouldn't test this."
[1]: https://www.cs.utexas.edu/~EWD/ewd13xx/EWD1305.PDF
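The kind of "future-proofing" abstraction being defended in that comment is often just a testing seam. A minimal sketch with all names hypothetical: the business logic depends on an interface rather than a live database, so a unit test can substitute an in-memory fake and run instantly and deterministically.

```python
from typing import Protocol

# A testing seam: business logic depends on an interface, not on a live
# database. All names here are hypothetical, for illustration only.

class OrderStore(Protocol):
    """What the business logic needs, not how it's stored."""
    def total_for(self, customer_id: str) -> float: ...

def loyalty_discount(store: OrderStore, customer_id: str) -> float:
    """Pure business logic: 5% off once lifetime spend passes $1,000."""
    return 0.05 if store.total_for(customer_id) > 1_000 else 0.0

class FakeStore:
    """In-memory fake: lets tests run without a database process."""
    def __init__(self, totals: dict[str, float]):
        self.totals = totals
    def total_for(self, customer_id: str) -> float:
        return self.totals.get(customer_id, 0.0)

fake = FakeStore({"alice": 2_500.0, "bob": 40.0})
assert loyalty_discount(fake, "alice") == 0.05
assert loyalty_discount(fake, "bob") == 0.0
```

In production the same `loyalty_discount` runs against a real store backed by the database; the seam costs one interface, not a framework.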
Why? We learn all these cool patterns and techniques to address existing complexity. We get to fight T-Rexes… and so we get paid good money (compared to other jobs). No one is gonna pay me 120K in Europe to build simple stuff that can work with a single SQLite DB and a PHP frontend.
The honest opinion no one wants to hear is that programmers do not deserve the money they are paid, because MOST of the time what's really needed is "a single SQLite DB with a PHP frontend".
If every company I know does this, how am I suppose to make money?
There are reasons for "unnecessary" complexity. Mainly cost and time.
The environment where the over-engineer tends to be promoted is one where the engineering department is (too) far separated from where the end users are. Think of very large organizations with walled departments, or organizations where there simply is not enough to do, so engineers start building stuff to fight non-existent issues.
— C. A. R. Hoare
More than once I have seen the same project yield two separate promotions, for creating it and deleting it. In particular this happens when the timescale of projects is longer than a single engineer's tenure on a given team.
But yes, avoiding complexity is rarely rewarded. The only way in which it helps you get promoted is that each simple thing takes less time and breaks less often, so you actually get more done.
a) finding a new theoretical frame that simplifies the space or solutions, helps people think through it in a principled way
b) finding ways to use existing abstractions, that others may not have been able to find
c) using non-technical levers, like working at the org/business/UX level to cut scope and simplify requirements.
The way you can make complexity work for your career, is to make it clear why the problem is complex, and then what you did to simplify the problem. If you just present a simple solution, no one will get it. It’s like “showing your work”.
In some orgs, this is hopeless, as they really do reward complexity of implementation.
In fact, simplicity often is the best future-proofing. Complex designs come with maintenance costs, so simple designs are inherently more robust to reorgs, shifted priorities, and team downsizing.
> But for Engineer A’s work, there’s almost nothing to say. “Implemented feature X.” Three words. Her work was better. But it’s invisible because of how simple she made it look. You can’t write a compelling narrative about the thing you didn’t build. Nobody gets promoted for the complexity they avoided.
Well, Engineer A's manager should help her write a better version of her output. It's not easy, but it's their job. And if this simpler solution was actually better for the company, it should be highlighted how, in terms that make sense for the business. I might be naive and too optimistic, but good engineers with decent enough managers will stand out in the long run. That doesn't preclude a few "bad" engineers gaming their way up at the same time, even in functional organizations, though.
There's a significant asymmetry though, it's not just a bit more work. I'm a bit cynical here, but often it's easier to just overengineer and be safe than to defend a simple solution; obviously depending on the organization and its culture.
When you have a complex solution and an alternative is stacked up against it, everything usually boils down to a few tradeoffs. The simple solution is generally the one with the most tradeoffs to explain: why no HA, why no queuing, why no horizontal scaling, why no persistence, why no redundancy, why no retries, etc. Obviously not all of them will apply, but also obviously the optics of the extensive questioning will hinder any promotion even if you successfully justify everything.
Simpler than what? The reason this phenomenon is so pervasive in the first place is that people can’t know the alternatives. To a bystander (ie managers), a complex solution is proof of a complex problem. And a simple solution, well anyone could have done that! Right?
If we want to reward simplicity we have to switch reference frame from output (the solution), to input (the problem).
The engineer that consistently quotes 3x my expectation (and ends up taking 4x with bugs) is going to look way worse than the engineer that ships things quickly with no drama.
It could be something overbuilt, large organization structures. Brittle solutions that are highly performant until they break. Or products/offerings that don't grow for similar reasons: simpler-is-better, don't compete with yourself. Or those that grow the wrong way: too many, too much to manage, frailty through complexity, SKU confusion.
Alternatively, things that are allowed to grow with some leeway, some caution, and then pruned back.
There's failure modes in any of these but the one I see most often is overreaching concern for any single one.
Promotions are supposed to incentivise people to stay, rather than leave. If the company never promoted anyone, people would leave. So there needs to be a path for promoting people. But that process doesn’t have to be transparent, or consistent, or fair; in fact it rarely is.
You promote people who consistently overdeliver, on time, at or below cost, who are a pleasure to work with, who would benefit the company long term, who would be a pain to lose. A key precondition is that such people consistently get more done compared to other people with equal pay, otherwise, they don’t stand out and they are not promotion material.
What counts as overdelivering will vary based on specific circumstances. It’s a subjective metric. Are you involved with a highly visible project, or are you working on some BS nobody would miss if it got axed? Are you part of a small team, or are you in a bloated, saturated org? Are you the go-to person when shit hits the fan, or are you a nobody people don’t talk to? Are you consistent, or are you vague and unpredictable? Does your work impact any relevant bottom lines, or are you just part of a cost centre? It really isn’t rocket science, for the most part.
Numerous times I've seen promotions going to people who were visible but didn't do the actual work. Those who share the achievements on Slack, those who talk a lot, get to meetings with directors, those who try to present the work.
1. Do I consistently deliver more (in output, impact, or reliability) than peers at my pay level?
2. Is my work visible and tied to meaningful business outcomes, rather than low-impact tasks?
3. Am I known as dependable and easy to work with, especially under pressure?
4. Would the company feel a real loss, operationally or financially, if I left?
5. Have I made myself clearly more valuable to the organization than what I currently cost?
- CV-driven development. Adding {buzzword} in production sounds better than saying I managed to make simple solutions faster.
- Job security. Those who wish to stay longer make things complicated, unsupportable, and unmaintainable, so they become irreplaceable.
Can you actually imagine a promo committee evaluating the technical choices? "Ok, this looks pretty complex. I feel like you could have just used a DB table, though? Denied."
Absolutely not! That discussion happens in RFCs, in architecture reviews, in hallway meetings, and a dozen other places.
If you want simplicity, you need to encourage mentorship, peer review, and meaningful participation in consensus-building. You need to reward team-first behavior and discourage lone-wolf brilliance. This is primarily a leadership job, but everybody contributes.
But that's harder than complaining that everything seems complicated.
A committee with no skin in the game, who knows? But a manager who actually needs stuff done, absolutely.
> The interviewer is like: “What about scalability? What if you have ten million users?”
This reminded me of how much more fun I find it to work on software that always only has one user, and where scaling problems are related solely to the data and GUI interaction design.
It can even happen that the tag "very smart" gets attached to those sidelined engineers. That's not necessarily a compliment.
The obvious outcome will be that the most skilled pretenders optimize for their own selfish, narrow view of profit, no matter what the consequences will be for the collective at large scale and in the long term.
In my (limited) experience as an engineer and manager, leadership (e.g., a VP) didn’t like (or reward) simplicity. Simple solutions often meant smaller teams, which wasn’t something they were pushing for, especially pre-2024. I do think this is slowly evolving, and that the best leaders now focus on removing unnecessary complexity and improving velocity.
In smaller companies it's a lot easier to express the distinctions and values of simplicity to ears that might actually appreciate it (so long as you can translate it into what they value - simple example is pointing out that you produced the same result in much less time, and that you can get more done on overall feature/task level as a result).
The real question is how do you tell engineer A who can figure out how to make the complex problems simple from engineer C who can't handle complexity and so writes simple solutions when the complex one is needed.
Essentially, there are two parallel teams, one is seen constantly huddling together, working late, fixing their (broken) service. The other team is quiet, leaves on time, their service never has serious issues. Which do you think looks better from the outside?
I don't think this phenomenon is unique to programming. My plumber was explaining how he put in a manifold and a centralized whole-house shut-off valve accessible indoors, and I was like, okay, thanks? I can just turn it off at the street.
Only established professionals have the status and self-confidence to show restraint. I think that explains interviews.
We have a calendar reminder to exercise the valves in our house yearly, and the fact that they’re easy to get at helps make sure it’s a quick job, not a tedious one.
Not a plumber, but have lived in enough old houses with iffy valves to have been bitten a few times.
Problem is in big tech -- the incentives are all aligned towards complex stuff
you wanna get promoted? do complex stuff, even when it doesn't make sense. If you get promoted, it also boosts your manager's perceived performance.
the only way to escape this nonsense is to work at smaller companies
right now you have people advocating for AI-coded solutions, never realizing that AI solutions result in a convoluted mess, since AI never possesses the ART of software engineering, i.e. knowing what to cut out
Good leaders can perceive the workhorse-vs-showhorse spectrum: critical toil vs. needless flash (and vice versa).
It’s hard. Most fail at hard things. The industry in the aggregate will fail at hard things.
So you get articles like this.
As a manager, it's a constant fight against the pressure to build "great software" that goes way beyond what the company needs, instead of building working software that addresses customer needs in a timely manner.
My dude, we are a startup with two servers and 20 customers; we do not need infinite scalability.
You can try to explain the OP's concept to a stakeholder in a thousand different sensible ways and you'll get blinking deer-in-headlights eyes back at you.
This skill is hard-earned and, so, rare.
Therefore, many hierarchies are built on sufficient mediocrity top to bottom.
Which works because bottom line doesn’t often matter in software dev anyway.
And even when it does matter, it's multiplicatively rare to have a hierarchy, or even a market it tries to serve, that can build, comprehend, and handle high power::complexity systems, products, and tools.
it just isn't very appetising
The service saw maybe a few hundred transactions per day; total database size: 2-3 GB. The system would hold data about each transaction until it was processed, then age it out over three months, keeping the database size fairly stable.
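The age-out scheme described above can be sketched as a simple retention job. This is a minimal illustration, not the actual system: the comment doesn't give a schema, so the `transactions` table and its `processed_at` column are assumptions.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Hypothetical schema: processed_at is NULL until the transaction
# has been processed, after which it records the processing time.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        id INTEGER PRIMARY KEY,
        processed_at TEXT
    )
""")

def age_out(conn, retention_days=90):
    """Delete processed transactions older than the retention window,
    keeping total database size roughly stable over time."""
    cutoff = (datetime.now(timezone.utc)
              - timedelta(days=retention_days)).isoformat()
    cur = conn.execute(
        "DELETE FROM transactions "
        "WHERE processed_at IS NOT NULL AND processed_at < ?",
        (cutoff,),
    )
    conn.commit()
    return cur.rowcount

# Seed one stale processed row, one fresh one, and one unprocessed one.
old = (datetime.now(timezone.utc) - timedelta(days=120)).isoformat()
new = datetime.now(timezone.utc).isoformat()
conn.executemany(
    "INSERT INTO transactions (processed_at) VALUES (?)",
    [(old,), (new,), (None,)],
)
removed = age_out(conn)  # only the stale processed row is deleted
```

Run periodically (e.g., from a daily scheduled task), a job like this is why the database size stays flat even as transactions keep arriving.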
Talking to a developer advocate for Azure, we learned that CosmosDB would get a Cassandra API, and we got access to the preview. The client was presented with a solution where the service would run as a single container in Azure Websites, using CosmosDB as the database backend. The whole thing could run within the free tier at that point: massive savings, and much easier to manage. We got rejected because the solution didn't feel serious and seemed too simplistic for an organisation of their scale.
On the other hand, I also once replaced a BizTalk server with 50 lines of C#, and that was well received by the client, less so by my boss, who could no longer keep sending the bill for a "BizTalk support contract" (which we honestly couldn't honour anyway).
I sometimes feel like that's what it is. Simple solutions make some people feel unimportant.
A lot of this boils down to the promo system being so systematized. I've never seen people in any other field I've worked in min/max their promotions this hard, complete with all the expert jargon. Packets, peers, comp, other companies' comp, what your boss thinks of you, what your boss thinks of your peers (née: competitors), and the inevitable crash-out when they don't get the promotion. All part of the bigco experience! I feel like when we systematized comp into ranks Lx, Ly we gave up our leverage a little bit.
And long before performance review time, I'd have mentioned further up that A was looking like a 5X engineer - best if we keep her happy.
What happened?
I once hacked together a spreadsheet in a week that was good enough to spare us a multi-month, three-dev project.
In the same team, I tweaked a configuration file for distributed calculations and shaved two minutes off a calculation that the user would run thirty times a day.
I got paid all right.
People don't give a shit about complexity or simplicity. They care about two things:
1. Does it work?
2. How soon can you ship?
There is a third thing that stakeholders really like: when you tell them what they should be building, or not building.