There are a couple of ways to approach counterarguments for this. One that might resonate well with people in the tech industry is an economic argument. Something like this:
In general, you want to be as efficient with your resources (money) as possible. Hardware costs money. Programmer hours also cost money. Dissatisfied users may cost the most money.
Striking a balance here requires having accurate and reliable estimates of how much each of these costs your organization (be it a company, an academic research lab, a free software group, or what-have-you).
If you are spending a lot on programmer salaries relative to hardware costs (and relative to the benefit you get in shipped product), because your programmers spend a lot of time waiting for their code to compile, fighting sluggish servers, or trying to optimize code past the point of diminishing returns, then perhaps it is worthwhile to invest a bit more in hardware if it saves money on salaries. Of course, maybe you get an even bigger savings in the end, because you end up with a better product that runs well on more people's computers -- again, you need fairly good estimates for how much each axis costs you.
Conversely, if you have given your programmers lots of expensive hardware, beefy laptops and heavy servers, and the code quality is bad enough that your product is sluggish on users' computers or difficult to scale (since you've already invested heavily in servers for a small deployment), then the cost of the hardware, user complaints, lost sales, and so on may outweigh the cost of having your programmers work longer on the product and deliver something better that runs on lighter machines.
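To make that concrete, here is a back-of-the-envelope sketch in C. Every figure in it (the hourly rate, the hours lost, the upgrade price) is a made-up assumption; the point is the shape of the comparison, not the numbers.

```c
#include <stdio.h>

/* Back-of-the-envelope cost comparison. Every figure below is a
 * hypothetical assumption, purely to illustrate the tradeoff. */
int main(void)
{
    double hourly_rate       = 75.0;   /* assumed $/hour, salary plus overhead */
    double hours_lost_weekly = 4.0;    /* assumed hours/week lost to slow builds */
    double team_size         = 6.0;
    double weeks_per_year    = 48.0;

    double yearly_waiting_cost = hourly_rate * hours_lost_weekly
                               * team_size * weeks_per_year;
    double hardware_upgrade    = 3000.0 * team_size;  /* assumed faster machines */

    printf("Yearly cost of waiting:   $%.0f\n", yearly_waiting_cost);
    printf("One-time hardware cost:   $%.0f\n", hardware_upgrade);
    printf("Upgrade pays for itself in %.1f weeks\n",
           hardware_upgrade / (yearly_waiting_cost / weeks_per_year));
    return 0;
}
```

Swap in your own numbers, and put the user-complaint and lost-sales costs described above on the other side of the ledger; the answer can easily flip, which is exactly why you need real estimates rather than intuition.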
This is just one approach to arguing the core point. Here's a more philosophical one.
When we write software, we're usually trying to enable people to do something they couldn't do before, and to make that available to the largest number of people possible (as far as is economically feasible -- we can't go on making things if we don't recoup some of the cost). The more resources your software requires, the fewer people can run it. This wasn't always a big issue, since you could be assured that in a couple of years hardware would have improved and more people would be able to run your software. However, that improvement is leveling off, and it is no longer something you can rely on.
Personally, I fall into the camp of wanting to write, as you say, "clean, efficient code", because I like knowing at a fairly granular level what my code is doing on the machine, using the smallest amount of computational resources possible while keeping a reasonable amount of flexibility, and finding local optima within those constraints. But I'm a bit of a perfectionist, so this line of reasoning may not work for everyone.
Finally, it's worth mentioning that different aspects of computer hardware are improving at different rates.
CPU clock speed stopped increasing around 15 years ago, and has been level or even slightly declining for power/heat reasons.
The number of cores per chip is increasing, but adding more cores has diminishing returns for any individual piece of software or algorithm (Amdahl's law makes this precise; see the sketch below).
The amount of RAM is increasing, but the latency of accessing RAM is increasing relative to processor clock speed, so each trip to main memory costs more and more cycles.
GPU performance is increasing, but this only helps for heavily parallel tasks (like machine learning) and graphics -- it won't boost your Java VM performance.
Disk sizes are increasing, although the average person's disk size has stalled for now as everyone switches over to SSDs. Most consumer-grade laptops have either 1TB HDDs or 120GB SSDs -- quite a range.
Disk access speeds are increasing as people switch over to SSDs, but drive lifetime is decreasing because of the limited number of writes an SSD can perform.
Network speeds are increasing in some places and stagnating in others, depending on your ISP and the competition in the area.
Personal computers are gradually giving way in popularity to mobile devices, which makes almost all of the above factors temporarily decrease if you look at averages. It also changes the way people use devices, and the kind of overhead involved in simply performing basic system tasks. All Android apps ultimately have to run in or alongside a Java-style VM, for example.
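On the diminishing returns of extra cores mentioned above: Amdahl's law pins this down. If a fraction p of a program's work can be parallelized, the best possible speedup on n cores is 1 / ((1 - p) + p/n). A minimal sketch, where the 90% figure is just an illustrative assumption:

```c
#include <stdio.h>

/* Amdahl's law: the speedup of a program on n cores when only a
 * fraction p of its work can be parallelized. The serial remainder
 * (1 - p) puts a hard cap on the gain. */
static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void)
{
    int cores[] = { 2, 4, 8, 16, 64, 1024 };
    for (int i = 0; i < 6; i++)
        printf("p = 0.90, %4d cores -> %5.2fx speedup\n",
               cores[i], amdahl(0.90, cores[i]));
    /* The output approaches but never reaches 10x: with 10% of the
     * work serial, 1 / (1 - 0.90) = 10 is the ceiling. */
    return 0;
}
```

Even a program that is 90% parallel can never run more than 10x faster, no matter how many cores you add; the serial 10% dominates.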
So depending on your problem, it may not actually be possible to throw hardware at it indefinitely. CPU-bound single-threaded tasks in particular are really not scalable any more -- pretty much your only option is to optimize, and you can only do that if you actually have some idea what your hardware is doing. That's part of why we established Handmade Network: to make an explicit statement that these things matter and will continue to matter, and to provide resources for people to learn how computers work today, so that they can write better, cleaner code without investing much more time or effort.
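To end with a concrete taste of what "knowing what your hardware is doing" buys you, here is the classic cache-locality demonstration. The two loops do identical arithmetic, but one walks memory sequentially and the other strides across it; on most machines the second is several times slower. The exact ratio depends on your hardware, and an aggressively optimizing compiler may narrow the gap, so treat this as a sketch to run yourself.

```c
#include <stdio.h>
#include <time.h>

#define N 4096
static float grid[N][N];   /* 64 MB, far larger than any CPU cache */

int main(void)
{
    double sum = 0.0;
    clock_t t0;

    /* Row-major traversal: touches memory sequentially, cache-friendly. */
    t0 = clock();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += grid[i][j];
    printf("row-major:    %.3fs\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    /* Column-major traversal: identical arithmetic, but each access
     * strides N floats ahead, defeating the cache. */
    t0 = clock();
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += grid[i][j];
    printf("column-major: %.3fs\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    return (int)sum;   /* use the result so the loops aren't optimized away */
}
```

Nothing about the language or the algorithm changed between the two loops; only the order in which memory is touched. That kind of difference is invisible unless you have a mental model of the machine, which is exactly the kind of knowledge we want to make easier to acquire.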