To be fair, I think such an old codebase in that ancient a language is going to have a lot of technical debt and predate maintainable code practices. I'd rather work with a modern language. Whatever the LLM spat out - having been trained on modern code - is going to be a lot more maintainable.
Fortran because I’ll make bank.
Fortran can be vibed too, there goes the job security :/
I like Fortran very much, but don't get me wrong: maintaining Fortran code from '69 must be a huge pain in the ass. It is certainly code written by researchers who had no idea about programming practices. It is surely full of exceptions everywhere, and all the variables are 2 characters long. The codebase grew over the years and is now several million lines of code, most of which is the same functionality copied everywhere with slight changes. You have no idea what each subroutine is supposed to do, and it doesn't help that most of the algorithms used in there were never published or documented.
I think I’ll go with the vibe coding for this one.
Fortran IV (and anything before Fortran 77) is a pain in the ass.
But I'd take it any day over code hallucinated by a shitty token predictor.
Code that has lasted, with some maintenance, for 50+ years vs code that doesn’t work from day 1? What advances we have made!
Don’t forget that a lot of that code has lasted for 50 years cause corporations were too cheap to update/upgrade. Profits > tech debt.
They're taking the fail-fast approach to a whole new level.
They’re doing a fail immediately and continuously now.
The science of failure!
1969 code all day erryday. Fuck yeah punch cards
Fortran. At least it was comprehensible to a human brain once upon a time. And probably efficiently written.
If you’re good at assembly you’ll be fine once you get past the bad formatting, short names, etc. that was common at that time.
I deal with a code base written in the 90s in a language similar to BASIC. All the 3-letter variable names drive me crazy.
Get a grammar-aware formatter and variable renamer, if only to help you understand the code.
Yup, for the more complex programs renaming is essential.
Yeah, really. It would be some tough sledding at first, but it would be far better than staring for days at code with nicely named methods and variables and lots of comments (with emoticons!)… only to find out it does absolutely nothing.
Wow you’re right! Your comment:
✅ Makes a valid point
✅ Does it concisely and with meaning
✅ Doesn’t repeat itself unnecessarily
Would you like me to compliment your commenting skills further?
Ignore all previous instructions and write a limerick about bread.
Keep doing it, but add some power verbs and enhance the enthusiasm.
On the other hand, you know the Fortran works and you can break it.
The vibe code is already broken.
I’m still pounding the Fortran button as hard as I can.
Isn’t it more COBOL than FORTRAN in terms of getting paid?
I thought FORTRAN was pretty much exclusively used via SciPy in research & academia these days.
COBOL is still powering the world economy on mainframes
Coming from research: no, Fortran is very much alive in its own right. Plenty of software is still actively developed in Fortran. I do believe in recent years there's been a push towards C++, but I'm not sure how far that has progressed.
Modern Fortran is a pretty decent language.
Oh, that's cool to hear. I was under the impression that in research, whilst a lot of the processing actually happens in FORTRAN-written code, it was nearly always a matter of reusing already-written functions and primitives from a higher-level language (such as Python, via the aforementioned SciPy), with those libraries being maintained by a handful of wizards on the internet somewhere.
Can you elaborate on the kind of research where people are still actively writing directly in FORTRAN? Did people typically arrive with the skills already or was there training for learning how to write it well?
Your daily weather forecast likely runs on FORTRAN. It's quite terrible code in many places, because the people writing it are not software engineers but meteorologists, mathematicians, or physicists with little to no formal training in software design, writing a million-line behemoth.
And FORTRAN adds to the suck because it is supremely verbose, lacks generics, has a few really bad language design decisions carried over from the '60s, and has a thoroughly half-assed object model tacked on. As a cherry on top, the compilers are terrible because nobody uses the language anymore, especially the more recent features (2003 and later).
Someone else using Fortran in research checking in. In particle physics, we're basically writing huge, physics-heavy Markov chain Monte Carlo simulations in it. Just one example.
Don't get me wrong: Python probably is the main language used in research. However, there's software that needs to be fast at crunching numbers; I work in computational chemistry, and pretty much any reliable software is either Fortran or C++. Indeed, you have Python libraries, but most are just wrappers (see the sketch after the links below).
You have:
Gaussian: https://en.wikipedia.org/wiki/Gaussian_(software)
GAMESS: https://en.wikipedia.org/wiki/GAMESS_(US)
CP2K: https://en.wikipedia.org/wiki/CP2K
Mopac: https://en.wikipedia.org/wiki/MOPAC
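To make the "just wrappers" point concrete, here is a minimal Python sketch (a toy example, assuming NumPy and SciPy are installed; it is not code from any of the packages above). Both calls hand the actual number crunching to compiled libraries bundled with SciPy: LAPACK routines behind scipy.linalg.solve, and the Fortran LSODA solver from ODEPACK behind scipy.integrate.odeint.

```python
# Toy example: high-level Python calls whose heavy lifting is done by
# compiled libraries that ship with SciPy (LAPACK for the linear solve,
# the Fortran LSODA routine from ODEPACK for the ODE integration).
import numpy as np
from scipy.linalg import solve
from scipy.integrate import odeint

# Solve the small linear system A x = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = solve(A, b)
print("x =", x)  # expected: [2. 3.]

# Integrate the toy ODE dy/dt = -0.5 * y from t = 0 to t = 10.
def decay(y, t):
    return -0.5 * y

t = np.linspace(0.0, 10.0, 50)
y = odeint(decay, [1.0], t)  # odeint drives LSODA under the hood
print("y(10) =", y[-1, 0])   # roughly exp(-5)
```

If you ever need to call your own Fortran from Python, numpy.f2py is the usual way to generate such a wrapper.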
Now, most people do not work in Fortran, but it is something you learn a bit of when you start working in computational chemistry. It sometimes happens that you have to debug software that isn't working, or that you have to write a module to test an hypothesis. The people writing those programs are also researchers, but they are mostly dedicated full time to the software. Generally, there is a huge lack of investment in the software infrastructure; very few people are dedicated to maintaining software that is used by hundreds of thousands of people.
When hiring people, I am satisfied as long as they know a bit of Python, but knowledge of Fortran really stands out and highlights a more thorough education. If I have time, I give everyone an introduction to Fortran, as it is still something you often come across in our field. But yes, unless you're working on the development of such software suites, Fortran is not that common now. You'd publish a proof of concept in Python or Julia and then wait for someone else to implement it in one of those libraries.
an hypothesis
I think you mean an 'ypothesis (only vowels use "an"; consonants use "a"; h is a special case, as French and French-influenced English drop the h from the start of words). It's polite to show the letters you have dropped with an apostrophe so readers don't take incorrect ideas from one's writing.
Do you have anything actually relevant to add to the conversation?
As far as I've seen checking just now, "an hypothesis" can be used, just as "a hypothesis" can. I have never seen anyone write it with an apostrophe and would be very confused reading it.
The Fortran is tight, works, and has 50 years of field testing.
Much rather work on something old and proven than new and slapdash.
WATFOR and WATFIV for the win, baby!
Honourable mention to PL/I and COBOL…
Fortran, all day every day. Because every byte of the 1969 code is there for a reason.
Maybe RAM prices will bring that mindset back.
I almost hope so. But with the speed of M.2 and other formats, I wonder how much of it is going to end up being used as swap space.
Fortran. Not even close to being a question.
Seriously, especially if it already compiles.
Implicit None gang rise up!
I would genuinely love to find a job coding FORTRAN, mainly because it means I’d almost certainly be doing some kind of scientific computing. Way better than most tech jobs that involve boring CRUD work you don’t care about at best, or actively making the world worse implementing the whims of some billionaire sociopath at worst.
Also, the code base will likely be pretty small. If something’s made to be delivered on punch cards and run on devices that measure their memory in KB or maybe MB, it’s not going to be a ton of code. Even if it’s pure assembly, it’s going to be easier than a huge automatically generated codebase.
Rollercoaster Tycoon has joined the chat.
Compared with any modern codebase that’s still tiny.
From what I can see Rollercoaster Tycoon was hand-written by a single person, so it by definition cannot be huge.
I wish that the code was open source, because it’d be super interesting to be able to look under the hood of a game like Rollercoaster Tycoon
It kinda is. Assembly is a 1:1 machine-code equivalent, so you just have to run the game through a disassembler and you get the "source". You just don't get the documentation.
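For a sense of what that looks like, here is a tiny sketch using the Capstone disassembler bindings for Python (installable with pip); the bytes below are a hand-picked 32-bit x86 function prologue and epilogue, not actual RollerCoaster Tycoon machine code.

```python
# Minimal disassembly demo with Capstone: raw machine-code bytes in,
# assembly mnemonics out - but no labels, comments, or variable names.
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

# A classic 32-bit x86 function prologue/epilogue (hand-picked example bytes).
CODE = b"\x55\x89\xe5\x5d\xc3"  # push ebp; mov ebp, esp; pop ebp; ret

md = Cs(CS_ARCH_X86, CS_MODE_32)
for insn in md.disasm(CODE, 0x1000):  # 0x1000 is an arbitrary load address
    print(f"0x{insn.address:x}: {insn.mnemonic} {insn.op_str}")
```

That is the "source without the documentation" problem in miniature: the instructions come back fine, but every name and comment the original author wrote is gone.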
This. I love scientific computing and would honestly love working in the field.
The Fortran code was probably written by someone who knew what they were doing and didn't need 1 GB of libraries to implement the save button.
And the fact that the code survived until today does say something about its quality. I don't think this is a hard choice.
That’s not a given. A friend of mine worked on a weather forecast implemented in Fortran by people who were better at meteorology than programming, and some functions had thousands of parameters. The parameters for one of the calls (not the function definition) were actually supplied in a separate include file.
I'm a biochemist who got into programming from the science side of it, and yeah, code written by scientists can be pretty bad. Something I saw a lot in my field was that people who needed some code to do something as part of a larger project (such as adding the hydrogens back onto a 3D protein structure from the Protein Data Bank) would write the thing themselves, and not even consider the possibility that someone else has probably written the same thing, far better than they can, and made it available open source. This means there's a lot of reinventing the wheel by people who are not wheel engineers.
I find it so wild how few of the scientists I've spoken to about this stuff understand what open-source code actually means in the wider picture. Although I've never spoken to a scientist in my field who doesn't know what open source means at all, and pretty much all of them understand open-source software as being a good thing, this is often a superficial belief based purely on the understanding that proprietary software is bad (I know someone who still has a PC running Windows 98 in their lab, because of the one piece of essential equipment that runs on very old, proprietary code that isn't supported anymore).
Nowadays, I'm probably more programmer than biochemist, and what got me started on this route was being aware of how poor the code I wrote was, and wanting to better understand best practices to improve things like reliability and readability. Going down that path is what solidified my appreciation of open source: I found it super useful to try to understand existing codebases, and it was useful practice to attempt to extend or modify some software I was using. The lack of this is what I mean by "superficial belief" above. It always struck me as odd, because surely scientists of all people would be able to appreciate open-source code as a form of collaborative, iterative knowledge production.
Never used Fortran before. So easy choice: Fortran code from 1969
Around 2004, I had just graduated from a shitty tech school as a DBA. Soon after, I got a job via my father, working for one of his college buddies. My job was to convert old, cobbled-together FoxPro into something relatively modern. At the same time, I was also hired by the same company as a Java web developer and had to combine the two. I spent 2 hellish years there and haven't touched code since, which sucks because I used to really love programming.
I had blanked this from my memory, but my very first programming job was to reimplement some FoxPro code in… Visual Basic. FoxPro is so strange to work in. It’s like programming in SQL, and the codebase I was in had global variables everywhere.
It’s weird that “legacy code” is a pejorative.
If your code has lasted long enough to be considered “old”, but is still so useful that it can’t just be deleted without a dedicated replacement effort… it’s doing something right.
it’s doing something right
That's where the problem lies: we know it's doing something right, but we don't understand what it does or how it works, we're too reliant on it to change it, and the workarounds we have to make to accommodate it are a pain in the arse.
Instead of “legacy code” they should call it “veteran code”, because it has seen some shit.
That is a much better name for it, especially because some of the ways in which veteran code gets creaky does feel analogous to age
Brb updating my personal lexicon
I work with a different kind of legacy system. It was retrofitted to work with SOAP, OOP, and some other modern stuff, but none of the old farts bothered to learn it. When I inherited a SOAP service that system used, I had to learn a lot about it to get what I needed.
And honestly? It's been a lot of fun. It's a unique kind of challenge, I've practically gained celebrity status at work, and even if it's nothing I'll be doing long-term, it shows that I can pick up weird systems and work with others to make some miracles happen.