If there is one book to read about testing, it is Working Effectively with Legacy Code. And if there is one thing to remember from it, it's
> Legacy code is untested code
So stop writing legacy code.
The definitive book about this is "Working Effectively with Legacy Code" by Michael Feathers. My personal bias is: if you can improve the code, do it, there's no reason not to. It sounds like this is a personal project with no real deadline or consequences, is that correct?
I bet ADHDers face greater difficulties with this task. Medication will most likely help you. I also highly recommend a classic book by Michael Feathers, Working Effectively with Legacy Code.
I recommend http://www.amazon.de/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
It has one chapter that specifically addresses dealing with monster methods. Also, it contains lots of useful dependency breaking techniques which will help you to get the code in a better shape over time without adding too much overhead to your regular development. Legacy suffering high-five ;)
although not rails specific, this gets mentioned a bit in the ruby community.
For me, just pick something important and write a test. If you're new to testing, I personally say start with testunit / minitest. They are simpler / easier to get going with, "in general". (flamebait)
Repeat at a sustainable rate. Use simplecov or something to help track your progress.
Not directly answering your question but I'd highly recommend reading "Working effectively with legacy code". Much of it you'll nod and think "that's obvious" but having stuff called out clearly makes you, or me at least, step back and really think how to do it in a particular case.
Once you have learned all the good practices, IMHO the most important book to read next is Working Effectively With Legacy Code, especially in this context of r/ExperiencedDevs.
I've been a developer for 28 years and have done my fair share of green-field projects. And I personally believe that the most challenging (and therefore rewarding) work is taking the brittle pieces of software that make up the backbone of a company and bringing them into the "fold" of modern, maintainable code.
Simple is hard and takes a lot of work. Great engineers write simple code, fast, but it's usually many years of experience that gets them there.
TDD isn't a panacea; software design, especially at the holistic system level, requires engineers who have an eye for it.
Taking a legacy (i.e. zero or poorly tested) code base and making it something you can use TDD on is really challenging. I think it's much better to introduce teams to TDD on something greenfield. That's not always possible, but turning a codebase around requires strong technical leadership.
I recommend this book, https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052, if you ever find yourself in that situation again.
Check out this resource: Working Effectively with Legacy Code
This is a great book that teaches you how to "cover" parts of your application legacy code with tests so that you can modify them as you see fit and ensure you're not breaking something accidentally. It also contains a bevy of methods for "dependency breaking" so you can properly segment your codebase into easily-testable chunks.
In my whole career, this has so far been the absolute best way I have found to properly and easily bring a legacy codebase into the modern era. Good luck.
I still highly recommend Michael C. Feathers' Working Effectively with Legacy Code. It contains a lot of practical approaches for dealing with legacy code.
I am reading this book right now (and I highly recommend it):
Working Effectively With Legacy Code
The short answer: you write tests, break dependencies, "cover" code you're changing with more tests, and then make the changes safely. I think this is such a cool strategy because you get cleaner, better code (most of the time) with an expanded test suite.
If the problem is your knowledge of the code base and not your knowledge of the domain, then you can write characterization tests. Those are unit tests you write to keep the existing logic intact so that everything remains the same except for the logic you're implementing.
See Michael Feathers' Working Effectively with Legacy Code
Sounds like a job for Michael Feathers' Working Effectively with Legacy Code
TL;DR: Create unit tests around what you want to touch. If it's big, refactor by extracting a method. Your tests are meant to cover all scenarios, even if the current behavior is wrong. This is called a characterization test. It's meant to keep the legacy behavior intact as much as possible while you introduce your changes. In legacy code, a bug is often what's holding other logic together, so if you fix it, it might have a ripple effect and break other stuff.
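As a concrete sketch of a characterization test (the quirky function and all names here are invented for illustration):

```python
# Hypothetical legacy function, invented for this example. It has quirks
# (silent truncation, an undocumented discount) that callers may rely on.
def legacy_price(quantity, unit_price):
    total = quantity * unit_price
    if quantity > 10:
        total = total * 0.9  # undocumented bulk discount
    return int(total)        # silently truncates cents


# A characterization test: we assert what the code DOES today,
# not what we think it SHOULD do, so refactors can't change it unnoticed.
def test_characterize_legacy_price():
    assert legacy_price(5, 2.50) == 12    # truncation: 12.5 -> 12
    assert legacy_price(20, 2.50) == 45   # bulk discount: 50 * 0.9 = 45
    assert legacy_price(0, 2.50) == 0

test_characterize_legacy_price()
```

If one of these assertions surprises you, that is the point: it is now written down, and any refactor that changes it will fail loudly.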
One good reference for refactoring in general is Working Effectively with Legacy Code. Many of the refactoring strategies and patterns discussed in the book are relevant to MATLAB programming.
and do you really have that many IQ points to spare? sorry, couldn't resist.
as for your question: I like to think we always have opportunities in most of the situations we find ourselves in.
there's a really interesting book I read a while back called Working Effectively with Legacy Code. it's great. it brings a collection of strategies covering refactoring, maintenance, management, optimization, and other topics. reading it, I believe you can bring some light to your team and make a difference, maybe even standing out at your company.
That's... not how that works.
There are seriously entire books written about this subject whose central thesis is "you can't possibly make changes in a spaghetti environment without tests and be sure that you haven't caused an issue elsewhere, so your #1 priority should be establishing a thorough, robust system of automated testing so that any and all code changes can be verified"
> Lol I’m aware of PHPUnit. And I know how to make test, I want to know more about testing
I am not aware of any courses, but there is a good book called Working Effectively with Legacy Code that basically talks about all kinds of testing strategies.
Even if you join a new organization there is no guarantee that you will find good quality code. You need to follow the Boy Scout rule and clean up the code. I have found the books Working Effectively with Legacy Code and Clean Code extremely helpful.
I had a few projects where I refactored code that had 3K lines in a single class.
I've usually been involved the other way, but the same principles apply. I recommend Working Effectively with Legacy Code as it's a great way to segment and peel off bits of functionality before rewriting it.
Yep, those are exactly the types of systems that are my bread and butter.
Have you read Working Effectively with Legacy Code?
There are plenty of techniques for iterating a tangled heap of spaghetti into a more manageable controlled mess.
I would second the recommendation for Test Driven Development: By Example by Kent Beck. I also got a lot out of xUnit Test Patterns by Gerard Meszaros.
One other comment: adding tests to an existing application is hard. Retrofitting an app with tests is a whole skill unto itself. I would recommend learning testing on some fresh apps and getting a good amount of practice that way before you try to add tests to your existing app.
Based on the URL I am going to guess the article says 'yes'.
Edit: still haven't read the article, but if you do find yourself with legacy (read: untested) code then get this and thank Michael Feathers!
Others have pointed out details about the code; but I want to highlight a sentence here
"I decided to make each logic gate its own struct because I thought that would give me maximum flexibility in the long-run"
This is a noble goal, but unless you can identify specific objective things that a design choice will do for you, then don't make the design choice. "maximum flexibility in the long-run" is not specific and objective. That's not to say that a restrictive design is the best either; but we rarely know what the best design is. Worse, the best design changes through the life of a piece of software. Start with the simple design and be prepared to change the design as you go. 
As you develop as an engineer, both through your own code and through reviewing the code of your peers, you'll identify good design choices and the reasons for using them; in that case go to town, but if you don't have specific reasons for doing something it can lead to this kind of morass that is difficult to work with and which will provide little value.
To understand how to code in a way that facilitates changing design, see Michael Feathers' book Working Effectively with Legacy Code, and imagine that the code you're writing today will be your legacy code tomorrow.
Reasonable advice for any code review legacy or otherwise...
If you are working with legacy code then it behoves you to have a copy of "Working Effectively with Legacy Code" by your side. Michael Feathers is great.
If you can't run, then a common recommendation is to use Working Effectively with Legacy Code by Michael Feathers.
Just hearing that its MainActivity is 1500 lines indicates to me that the code base is awful and the previous developers have stopped caring. Michael Feathers' book has good advice for dealing with code like this, with no structure, no documentation, and no automated tests.
But mostly, I agree with the comment above saying 'RUN!'. This sounds like a really stupid project and you should start preparing to interview at other companies. It sounds like your organization has no idea how to manage software projects at all.
I'd recommend reading this book: https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
Don't perpetuate the cycle! Make sure everything new that you write, and anything you modify, is tested and documented.
Rather than jumping in to the middle, search for whatever project management artifacts they have. Is there a Jira board, shared directory or similar which details what features have been requested and added over the years? This can be a good starting point to figuring out the codebase.
BTW, have you read the Feathers book on legacy code? It's pretty damn good and helped me a lot.
I'm not saying you're a bad developer. What I am saying is that Feathers is a rare genius.
(There might be a newer edition)
Change what you can, when you can, without introducing significant additional risk. Lead by example - write code the way you feel it should be written within the confines of the system.
Assuming you're using a VCS that was created in this century, delete dead (commented-out) code in files when you're in them for other purposes.
Do not randomly reformat code (especially tabs to spaces or vice versa) unless doing so will make the file internally consistent.
Do not rename things for the sake of renaming them (especially public ones).
You may find this book useful.
I have been in this situation, working with huge legacy systems and fixing arcane bugs. There is no quick fix to this.
Try giving this a go:
and read about rubber ducking. I found out that in my case most of the things I went to ask other devs got sorted out when I was explaining them the problem.
Hope it works out for you!
If there is an existing test harness, I'd try to write more tests. If not, setting up a test harness can really get you moving quickly. One reason is that you don't always have to wait for the app to spin up and then make sure you've put breakpoints in the right place. Even if the code is really hard to test, like a buggy legacy app, finding a "wedge" (i.e. anywhere that you can write and execute any kind of test) helps me wrap my head around the structure of the codebase quickly. Working Effectively With Legacy Code has been a good resource for me.
Get the book Working Effectively with Legacy Code; it's an amazing resource for this stuff.
Basically you need to get your code under test. In order to do that you sometimes have to flip dependencies or introduce a shim. There are many useful techniques outlined in the book.
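A minimal sketch of one such dependency-breaking move, a "parameterize constructor" with a test shim standing in for the real dependency; every name here is invented for illustration:

```python
# Before the refactor, the class would reach out to a concrete external
# dependency, so tests would need the real thing. After flipping the
# dependency, a cheap fake can be substituted in tests.

class FakeGateway:
    """Test shim standing in for a slow/external payment gateway."""
    def __init__(self):
        self.charged = []

    def charge(self, amount):
        self.charged.append(amount)
        return True

class OrderProcessor:
    def __init__(self, gateway):          # the seam: dependency injected
        self.gateway = gateway

    def process(self, amount):
        if amount <= 0:
            return False
        return self.gateway.charge(amount)

gateway = FakeGateway()
processor = OrderProcessor(gateway)
assert processor.process(25) is True
assert gateway.charged == [25]           # the fake recorded the call
assert processor.process(0) is False
```

Production code passes the real gateway in; tests pass the shim. Nothing else about `OrderProcessor` changes.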
As for legacy code, I went to a coding dojo once and they introduced a book by Michael Feathers; the author's point is that legacy code is untested code. Here's the book: "Working Effectively with Legacy Code".
It sounds like the first problem you have is that the code is not written in a way that is easily separated out. That is unfortunate, but unless you get a magic wand it is something you will have to deal with. It sounds like you have the .git side sorted out, but really git is both a parachute and a deployment tool; your problem is in the code.
Your basic setup sounds good with a development server. What I'd recommend is that you either run the developer server locally on your machine or run it in a virtual machine since you don't always seem to have access to a development server.
If you have no tests and you need to start, start at the top. Write some high-level acceptance tests in JUnit. These tests should be testing the wanted behavior of the application, the requirements. They should test the application through the high-level interfaces.
Once you know your application does what it should you can go and start writing unit/integration tests against individual classes.
If you want to read up about testing legacy code then check this book
I have a similar problem with some of the legacy code I have to work with. There is a lack of testing; even now that I have the coverage up to 30%, any major changes are painful to do. I can never have confidence that I'm not breaking something. It is a hard job to try and turn the test coverage around, because it will require a lot of time and it's classed as "not essential".
If you're feeling brave, read Working Effectively with Legacy Code. I've still not got round to looking at it, but it's been recommended. It might give you a good idea of how to start getting your coverage up.
Working Effectively with Legacy Code is not C specific, but is widely recommended.
I've been here a few times in my decade-long career.
To start, this isn't something that only happens to junior developers. Trying to approach a large existing code base can be a real challenge, even when you have lots of experience.
You're not going to understand the whole application in a day. Probably not even in a week, and probably not even in a month. On some large code bases, I've regularly run into new code *years* after I first started working on the application.
What I've found helpful is to pick a small part of the application; preferably one that's related to a feature you're trying to add or a bug you're trying to fix. Find what looks like the entry point of that small part of the application. In a web app, it could be a method in a controller class. Or it could be a method in a service class somewhere.
Once you've found that entry point, read through the code one line at a time, and try to make sure you understand what's happening at each point. If the method you're in calls another method/function, jump to that and go through it one line at a time. On code that's particularly complex, I'll grab some sheets of lined paper, and devote one sheet to each method I go through.
As I go through each method, I'll write out the whole thing by hand as pseudocode. In order to do this, I have to understand what the code is doing. Some people might find it more effective to do this in a text editor. I find that there's something about the process of physically writing it out on paper that really helps cement my understanding.
Now, the whole writing out part isn't worth it if you just need to go in and do a quick bug fix. But if you've been handed responsibility for a chunk of code and you'll need to understand it deeply, I've found it to be a useful approach. I think it can still be helpful even if you're not solely responsible for a piece of code, but will have to work on it heavily.
Start by deeply understanding one important part of the code. Then move on to understanding another important part. Soon, you'll start to see patterns and understand how these important bits of code fit together.
If you're not yet sure what the important parts of the code for you to understand are, then a good way to find out would be to look at the repository's commit history to see which files have the most commits over time. The places that change the most often are likely the ones *you* are going to have to change, so they are a good place to begin. You can find instructions on how to do this here:
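One minimal way to get that ranking from Git (a sketch, assuming a POSIX shell with `git` on the PATH; the throwaway repo at the top exists only so the example is self-contained):

```shell
# Build a tiny throwaway repo so the pipeline below has something to chew on.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
echo a > hot.py && git add . && git commit -qm "c1"
echo b >> hot.py && echo c > cold.py && git add . && git commit -qm "c2"

# The useful part: every file ever touched, ranked by number of commits.
git log --name-only --pretty=format: | grep . | sort | uniq -c | sort -rn | head -20
```

In your own repository you only need the final `git log ...` pipeline; the most frequently changed files show up at the top of the list.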
That's assuming your code is in a Git repository. If your team uses Mercurial, you can look up instructions on how to do the same thing. If your team uses Subversion or heck, even CVS, you can probably accomplish the same thing. If your team doesn't use source control at all, then start spiking your morning coffee with rum or Kahlua, because that will make your job significantly less painful.
For a look at using Git commit history to find the most important code - and the parts with the most technical debt - I enjoyed a book called Software Design X-Rays.
I've found the book Working Effectively with Legacy Code to be quite helpful in showing me different ways to approach an existing code base. Even if you don't apply all of the techniques the book suggests, I think it's still useful for finding out ways to find 'seams' in the code that you can use as points of attack when refactoring, adding features, or even just choosing a place to start learning a new bit of code.
If your employer will let you expense the cost of eBooks, you might find these interesting. If you can get access to Safari Books Online, both these books are available on there, along with a metric ton of great software development books. You might not need to pay for it - in my city, everyone with a public library account can access Safari for free. Maybe it's similar where you are?
Also, if you have a particularly frustrating day, feel free to come on Reddit and send me a DM. I might just have some useful advice. And if I don't happen to have useful advice on a particularly topic, I'll at least be able to come up with an on-topic smartass remark that will help you laugh and feel better about the code that frustrated you.
On this topic I’d like to recommend this book:
Check out the book Working Effectively With legacy Code
Working Effectively with Legacy Code https://www.amazon.com/dp/0131177052?ref_=cm_sw_r_apin_dp_XSB58CH2EK8YEHTWRPX8
> Testing is also pretty hard
This is true. Testing requires defining what you need to do, clearly, and then defining nothing else. Good testing is the guide to becoming a great engineer (the next step, becoming an amazing engineer, is about empathy, awareness, and other emotional/administrative aspects). The problems you describe signal a couple of issues with the tests, things I'd immediately suspect without seeing them (I may be wrong; you can't know until you see the tests).
> First not everyone was on board making it impossible to handle if only part of the team put their shoulders to the wheel
This isn't a problem of convincing the engineers. Testing is a problem of convincing management, with evidence and proof on the cost of errors and problems, and the advantage of saving time by catching it earlier. Generally you start with broader and more abstract tests (and unit tests because those are easy) and then start on getting a solid testing strategy. You will need data to make it valid. At some point you do have "enough testing" where it's cheaper to just let the bug through and fix it. This isn't easy, you need (multiple) amazing engineers to convince upper management.
Once you do, then it's easy for the engineers: it's part of the job. If you refuse to do it, then you're refusing to do your job, there's obvious consequences to that.
That said, you also will need to slowly improve the testing culture. It seems that could help.
> the issues that simple changes were breaking so many tests it was slowing us down, we had to refactor much of the test code.
That strongly signals that the implementation was being tested. Generally there are a few more symptoms: mocks are used a lot in the code, stuff works differently in tests, etc. What's happening is that the test code is testing how the code is written and how it actually gets to the solution. Whenever you change a thing, you also need to update the test, because you changed how the code works. That is, tests of implementation ensure that the code wasn't changed, so changing it will break them.
What you want is to test behavior. This is a bit harder. There's a minor version of the problem even with behavior, which is when one test covers multiple behaviors. If your function changes (returning a different error, or handling an edge case differently), a bunch of tests break. Ideally only one test should catch that; in practice a few will, even when you make the effort.
Generally I see this happening because of an obsession with code coverage. I feel it's a good general metric, but it shouldn't drive how new functions and code are tested. The focus should be on good enough tests. It's better to have lower coverage made of useful tests than high coverage made of tests that merely exercise the lines. The reason this happens is that, in order to cover a specific branch, you begin to assert what happens on that branch, and before you know it you're writing implementation details.
Whenever you have a test that fails and it wasn't a bug, the test should be deleted. It's a false signal and just adds noise. It's better to realize there's less testing than originally thought. It may require a separate test.
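A tiny illustration of the behavior-vs-implementation distinction (the function and assertions are invented for this example):

```python
# Invented example. A behavior test asserts the observable contract;
# an implementation-coupled test pins HOW the result is produced and
# breaks on any refactor, even one with no real bug.

def normalize(name):
    # current implementation detail: strip then title-case
    return name.strip().title()

# Behavior test: only the input/output contract matters. This survives
# a rewrite of normalize() to use, say, a regex.
assert normalize("  ada lovelace ") == "Ada Lovelace"
assert normalize("grace") == "Grace"

# Implementation-coupled test (what to avoid), shown commented out:
# it asserts which methods the body happens to call, so swapping in an
# equivalent regex-based body would fail it with no behavior change.
#   assert normalize.__code__.co_names == ("strip", "title")
```

The first style lets you refactor freely; the second turns every internal change into a test failure.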
> Now it's quite well suited for changes, but a bit too complex for my taste.
This actually happens without realizing it. A lot of times it's due to two separate issues. The first is trying to isolate tests too much; this shows up as too many mocks and fakes. It's correlated with testing implementation, but not strictly: you can have one without the other. It's OK to use real things. Sometimes you don't want to mock files; you want to instead run against a pre-set RAM-backed filesystem. While setting that up has its own complexity, it's generally handled well enough by libraries and the OS.
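For example, instead of mocking file APIs you can point the code at a real temporary directory; the `count_lines` function here is invented for illustration, and `tempfile` is standard library:

```python
# Run file-handling code against a real (temporary) directory rather
# than mocking open()/os. Cleanup is automatic when the context exits.
import tempfile
import pathlib

def count_lines(path):
    # invented function under test
    with open(path) as f:
        return sum(1 for _ in f)

with tempfile.TemporaryDirectory() as d:
    data = pathlib.Path(d) / "data.txt"
    data.write_text("one\ntwo\nthree\n")
    assert count_lines(data) == 3
# the directory and its files no longer exist here
```

The test exercises the real I/O path, so it also catches encoding, newline, and path bugs that a mock would hide.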
Also on that, not all functions need testing. At some point it's easier to test things at a larger scale (i.e. run the whole binary in well-predicted manners and throw stuff at it without thinking more about the units themselves). Some things are just easier to catch at the integration/end-to-end level. Trying to cover them with a unit test will leave you with something that is as hard, or even harder, to debug.
The second issue is not taking the design cues from tests. A good test is as simple as possible, but it always reflects the complexity of the interface it's testing. Ideally this should be the complexity of the problem itself. If a test feels harder or more complicated than that, it may be signaling that your code is more complicated than it should be, and that there may be a better design. Following those cues makes code easier to change and update in the future, and the code will get better. On the other hand, tests should not make your code more complicated to use than it needs to be.
> We'll get there and we've found many bugs, but it's still a steep slope.
That's the right attitude. It's an ongoing work and perfect is always the enemy of better. But in that view tests should make things better always, if things get worse before they get better due to testing, that's a signal that the strategy for improving testing may not be the right one.
> Anyone got a good in-depth book/guide about automated testing?
Depends on what level you're working at. I've found that [Working Effectively with Legacy Code](https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052) is a good guide when you're trying to start up testing. Even though the code you're dealing with is not legacy, it will quickly feel like it as better-tested code comes up. I still go back to this book when I find myself having to help a project deep in technical debt and need a strategy to get enough testing and management buy-in that we can move it forward.
To help devs learn more about the things I talked about, [Effective Unit Testing](https://www.amazon.com/Effective-Unit-Testing-guide-developers/dp/1935182579) works pretty well; the wisdom applies to more than Java, and to more than unit tests (though some things are specific to those two, be aware).
I also recommend running some TDD at least by the senior devs. Have a small project or library use it strictly. Writing tests first makes some things click in a way they wouldn't otherwise. Once your senior staff has a stronger intuition they can, through code reviews and what not, promote better testing. As this results in more tests, junior devs can learn by reading the code and other tests, and code review will help with the rest.
Working Effectively with Legacy Code is a great book if you want a solid resource. https://www.amazon.com/dp/0131177052/ref=cm_sw_r_cp_api_i_5EghFbWH7G929
The book you are referring to is Working Effectively with Legacy Code by Michael Feathers, a very good practical book I recommend to everyone on my team.
It's an older book, but if you haven't read it, it might be of particular use for you: Working Effectively with Legacy Code
Here's a bit of a summary of it: https://understandlegacycode.com/blog/key-points-of-working-effectively-with-legacy-code/
It's this one.
It's pricey, but I promise it's one of the most valuable books in my collection. Everyone I've recommended it to has thanked me.
The Michael Feathers book “Working effectively with legacy code” has been very useful to me.
If you're dealing with existing code I highly recommend Michael Feathers "Working Effectively with Legacy Code".
I also like the classic Extreme Programming Explained which deals with a number of topics but unit tests is a big part of it. It's how I got my first taste.
While there can be infrastructure to facilitate unit testing of specific tech stacks, the general approach to unit testing is the same no matter what stack you're using.
Start with a thought experiment: what's the simplest way for me to call the code that I want to test? In the case of a servlet, you might instantiate the servlet class, call its init method, call its service method, examine the HttpServletResponse, and then finally call destroy on the servlet instance (because we want to clean up after ourselves).
Essentially, have your test code pretend to be the servlet host.
OK, so why might that not work? Well...
To get untested code into a unit test, you often have to refactor the code. You have to control external dependencies and you might have to change the API in order to make it easier to call from a test.
On my project, we recently refactored some report-generating code. It was wrapped up in the GUI layer, and that made it hard to get into a unit test. We extracted the "guts" of the report into a separate class (it was easy; we had previously started to move in that direction). Then we wrote unit tests against that separate class. This allowed us to control the external dependencies of the report code when run from within a unit test. Instead of gathering data from global "application services", the refactored code instead gets those "application services" as constructor parameters.
So for your servlet example, maybe it makes sense to extract some of the "guts" into separate classes with clearer dependencies. If the servlet currently accesses the database directly, maybe you could extract a different class with a constructor parameter that acts as a stand-in for the database.
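A sketch of that extraction, with invented names: the "guts" live in their own class, and the service it used to grab from globals arrives through the constructor instead:

```python
# Invented example of the "extract the guts, inject the services" refactor
# described above. The report logic no longer reaches into global
# application services; a test can hand it an in-memory stand-in.

class InMemoryUserService:
    """Stand-in for a global 'application service', used in tests."""
    def active_users(self):
        return ["alice", "bob"]

class ReportGenerator:
    def __init__(self, user_service):     # dependency supplied by caller
        self.user_service = user_service

    def generate(self):
        users = self.user_service.active_users()
        return f"{len(users)} active users: {', '.join(users)}"

report = ReportGenerator(InMemoryUserService()).generate()
assert report == "2 active users: alice, bob"
```

The GUI layer (or servlet) keeps only the thin wiring: it constructs `ReportGenerator` with the real services and renders the result.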
I have and continue to recommend the book Working Effectively with Legacy Code. It defines legacy code as code without unit tests. It then describes a mental model for introducing tests. It's all about finding "seams" and then leveraging them.
The first maybe 80 pages covers that, and then the rest of the book is a recipe book of "I have this challenge with adding unit tests, what solutions could I use to resolve that challenge?"
There's a specific book about this issue:
I'm told it's good but I can't afford it …
Buy this book https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
You'll not go wrong doing what he suggests...
Seconded! IMHO there is a widespread misconception that starting from scratch will automagically make everything better. But in reality it won't. Starting from scratch is a huge undertaking and will only lead to the next flawed system. You will never get a complex software right on the first try, some problems are just too big to tackle in one go, just as you don't swallow a 500 g steak in one piece. Decomposition is a very powerful technique.
Refactoring is a technique you will have to employ not once, but many times, even on the same components. But it will transform your code little-by-little to a better system than starting from scratch would have. On top of that it has the big advantage that it won't set you back tremendously, which might kill any motivation or velocity you had.
Btw, I can really recommend the book Working Effectively with Legacy Code, it's an eye-opener.
Working Effectively with Legacy Code https://www.amazon.com/dp/0131177052/ref=cm_sw_r_awdo_navT_a_FTH7MVQHKY1J5A3JFH2V
Refactoring: Improving the Design of Existing Code (2nd Edition) (Addison-Wesley Signature Series (Fowler)) https://www.amazon.com/dp/0134757599/ref=cm_sw_r_awdo_navT_a_Y6053S4FXE2BJXP4MGT4
Haskell is a great language for parsers and compilers for sure.
I would also pick up a copy of Working Effectively with Legacy Code by Michael Feathers: https://www.amazon.ca/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
While the technical examples in there are geared towards Java the ideas behind it have been useful to me throughout my career.
The main point will be writing that interface layer, as you say, in Haskell; but more importantly it will be writing functional tests which exercise that layer against the legacy implementation, verifying your understanding and expectations. Once you have those tests in place you can start writing pieces in Haskell to replace certain code paths and start exercising the test suite against the Haskell code.
The book goes into much more detail and is worth it.
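The shape of that workflow can be sketched like this (in Python for brevity; both `slugify` implementations are invented stand-ins for the legacy code and its replacement):

```python
# Invented sketch: one suite of behavior checks, exercised against both
# the legacy implementation and its replacement, so the rewrite must
# agree with the original before it takes over a code path.

def legacy_slugify(title):
    # stand-in for the old implementation
    return title.lower().replace(" ", "-")

def new_slugify(title):
    # stand-in for the replacement implementation
    return "-".join(title.lower().split())

def check(impl):
    # the shared functional tests; both implementations must pass
    assert impl("Hello World") == "hello-world"
    assert impl("legacy code") == "legacy-code"

for implementation in (legacy_slugify, new_slugify):
    check(implementation)
```

Where the two disagree on an input, the tests force you to decide whether that difference is a bug fix or a regression before switching over.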
Working Effectively with Legacy Code is an excellent book on this topic - https://www.amazon.ca/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
A test harness is the way to go.
I mean, it's a small startup, I'm not surprised.
How's the pay and work/life balance, benefits, etc.?
If that's all good then you can probably learn a lot, I know I did when I worked at small startups, and more corporate jobs with better code felt easier for me afterwards.
This book might help you out
"Legacy code" is the book being polite, but it generally helps you refactor shit code.
However, if the dumpster fire code makes the rest of the job a dumpster fire.... probably do look for a new job.
So I work at an IoT company and we're stuck on PICs until there's a better supply chain for cheap single-board computers, i.e. Raspberry Pi Zeros or ESP32s, but we cannot find qualified firmware candidates for our PIC platform to save our souls. Similarly, we struggle to find qualified engineers to maintain our product's back-end because it's a 20+ year old PHP codebase, which a lot of candidates seem to treat as a rewrite opportunity, never mind that async php-fpm alternatives exist and would let us leverage our existing libraries.
But I digress, our problem is that we moved a bay-area company to middle America, and they did not factor in the popularity of our technology stack before doing so. We do eventually find candidates but it takes some time to do so. It might have impacted our ability to grow but we're also trying to stay as small as possible so it's not that much of an issue.
For your situation, I think it'd make sense to give the book Working Effectively With Legacy Code a read, and do what any good scientist would do: dissect your team's product. There's probably some discernible structure but you'll have to get your hands dirty to figure it out. Where I work, the 20+ year old PHP codebase was written mostly by one man with virtually no documentation, so to learn it I took it apart brick-by-brick and built little personal projects using the techniques I learned. It has a hand-rolled ORM, I took it apart and built my own ORM. It has a proprietary views system using a templating engine, I took it apart and built my own. It has an API, I figured out how it worked and how to make it into a standalone package. I'm even going so far as to clone the project in my spare time to gain a broad understanding of the system as a whole.
If all that fails, then find a new job. Because the reality is, at a lot of jobs you have to dig and learn much in the way I just described in order to be successful. You might as well cut your teeth learning that process now, and then cut and run so you can set yourself up to be successful at the next place.
> Refactoring Legacy Code
are you referencing Working Effectively with Legacy Code,
or Fowler's "Refactoring"?
Short answer: get Michael Feathers' book Working Effectively with Legacy Code.
> the only solution to have a complete unit test environment would be to rewrite all the queries into hql/criteria so that unit tests could run on isolated embedded databases, right?
It's better to stick an interface between your Hibernate + PL/SQL + DB stuff and your business logic, so you isolate all of that from the application's business logic behind an implementation of the interface. Create an Anti-Corruption Layer.
Then, you can mock out the interface, so you can write pure unit tests. And you can write integration tests that test the isolated interface implementation, using something like DbUnit, which can set up database test fixtures with test data specified in XML. You can go further in Oracle by creating synonyms to mock tables to isolate live data from test data.
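The interface-plus-mock split described above looks roughly like this (sketched in Python rather than Java for brevity; the repository and business-logic names are illustrative assumptions):

```python
# Minimal sketch of an Anti-Corruption Layer: business logic depends
# only on an interface, so the Hibernate/PL/SQL/DB details can be
# mocked out in pure unit tests. All names are illustrative.
from unittest import mock

class OrderRepository:
    """The interface hiding the database-backed implementation."""
    def total_for(self, customer_id):
        raise NotImplementedError

def loyalty_bonus(repo, customer_id):
    # Business logic: knows nothing about Hibernate or Oracle.
    return 10 if repo.total_for(customer_id) > 1000 else 0

# Pure unit test: mock the interface, no database needed.
repo = mock.Mock(spec=OrderRepository)
repo.total_for.return_value = 1500
assert loyalty_bonus(repo, 42) == 10
repo.total_for.assert_called_once_with(42)
```

The integration tests then target only the concrete `OrderRepository` implementation, with something like DbUnit setting up fixtures.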
Michael Feathers recommends creating a set of functional tests first as a safety net before refactoring, if you're concerned about safety.
I think I know where you are in your understanding of tests, because I think I remember when I was there.
The thing to realize about mocks (and to a lesser extent stubs) is that you use them in a completely different context than you would use an assert-oriented test. Asserts are used to test values, and mocks are used to test interaction. You would use asserts to test that you had implemented sqrt correctly, or that you are correctly converting from RGB to HSV, or that you have computed the correct total for the bill. I can't stress this enough.
A good example would be this: suppose you were implementing a SAX parser. Not the callbacks that the parser calls, but the parser itself - something like javax.xml.parsers.SAXParser. You care how an instance of this class interacts with other instances, and the best way to test those interactions is with a mock. A SAX parser clearly has state, but the clients of the parser all indirectly observe that state by the way in which the parser pokes them... so you should test it in the same way.
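That asserts-test-values versus mocks-test-interactions distinction can be shown with a toy event-pushing parser (Python here; the parser and handler are illustrative stand-ins for a real SAX parser):

```python
# Interaction-style test: we care *how* the parser pokes its handler,
# not what the parser returns. Names are illustrative.
from unittest import mock

class TinyParser:
    """Toy SAX-style parser: pushes events to a handler."""
    def parse(self, text, handler):
        handler.start_document()
        for word in text.split():
            handler.element(word)
        handler.end_document()

handler = mock.Mock()
TinyParser().parse("a b", handler)

# Assert on the sequence of interactions, mirroring how real clients
# observe the parser's state indirectly through callbacks.
handler.assert_has_calls([
    mock.call.start_document(),
    mock.call.element("a"),
    mock.call.element("b"),
    mock.call.end_document(),
])
```

An assert-oriented test of `sqrt` or an RGB-to-HSV conversion would instead check return values; the mock test above checks the conversation.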
> You must not allow any other unrelated code to be executed outside of the tested method.
I disagree very much with this statement. It's fine to invoke other code as long as it is at a lower level of abstraction, it is separately tested, and it's still easy to get the system into the state you need. Testing is all about finding (or making) seams, and then exploiting them. Not all seams need to be run-time seams. (For more on this, read Working Effectively with Legacy Code (amazon) - as I recall, it's really good).
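A simple example of a non-run-time seam, in the Feathers sense, is an object seam: subclass and override the awkward dependency instead of mocking it (Python sketch; class and method names are illustrative assumptions):

```python
# An "object seam": the overridable transmit() method lets a test
# exercise the surrounding logic without the real side effect.
# All names here are illustrative.

class ReportSender:
    def send(self, report):
        body = self.render(report)
        self.transmit(body)          # the seam: overridable in a subclass
        return body

    def render(self, report):
        return f"REPORT: {report}"

    def transmit(self, body):
        raise RuntimeError("would hit the network in production")

class TestableSender(ReportSender):
    """Exploits the seam instead of patching at run time."""
    def __init__(self):
        self.sent = []
    def transmit(self, body):
        self.sent.append(body)

s = TestableSender()
assert s.send("q3") == "REPORT: q3"
assert s.sent == ["REPORT: q3"]
```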
As other people have said (with unnecessary harshness), applying mocks to all testing does lead to code with more complexity than necessary. I have seen projects destroyed by unneeded complexity all in the name of testing.
I would recommend that every professional developer read Michael Feathers' book, Working Effectively with Legacy Code, to understand *why* testing is so important. https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052/ref=sr_1_1?dchild=1&keywords=legacy+code&qid=1603215526&sr=8-1
It's been a while since I've read it, but Michael Feathers' book about working with legacy systems may help when you get to the point in refactoring where you are saying "but how do I add unit tests to this?"
you can skim the TOC here
and it's on amazon as well. (not an affiliate link)
I used that book and some C++ test frameworks to build test scaffolding around a serial-port dialer program before and during refactoring it to clean it up and add support for LTE modems. The program originally supported maybe a dozen variants of GSM/CDMA modems via a bunch of clusters of 10-12 layer nested if statements. (seriously) I was able to re-write it into an OOP inheritance hierarchy for all existing and new modem types. The new version shipped with only 1 defect escaping my development/testing. That defect was mainly b/c I got lazy with testing in the 2nd half of my efforts, and it ended up being only 2-3 LOC to fix.
If you're not sure where to start on adding unit testing, or any testing, to existing systems, I think it's a valuable read.
I'd recommend this book: https://smile.amazon.co.uk/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052?sa-no-redirect=1
"Working Effectively with Legacy Code" has a really good reputation as a useful tool for approaching projects like this.
Have you read <em>Working Effectively with Legacy Code</em>?
I ask because mundane problems and new solutions do not always go hand in hand. Sometimes working on the bleeding edge isn't possible; even when it is, I've been there: correcting Microsoft employees in C#, correcting Google on timezones in Chrome, tooling employees in their own stack, etc., and I don't think that's even what you're looking for. It's more of the same.
I've found grace in truly helping companies refactor, restructure, re-stack and modernize their applications for scale. Be it on mainframe applications or full-stack client-facing web applications, taking something old and completely ripping it out for something new is an art form. It's much easier to try and write your own code, find a problem - hit a wall, search for a solution and copy pasta. It's much harder to understand the problem you're facing, which stems from understanding someone else's code, and derive a solution. In the same way, it's easier to create something new than fix something old (holding creativity constant).
This is what has led me to the art of architecture, refactoring beyond SOLID and design patterns by the GoF - to truly coming into a new company and really understanding their business first, then their solutions, and the consequences of re-writing all of their applications in a new, automated way.
I'm not your "pixel-perfect" guy. While I see why you focus on the "front-end" being what bores you, I think instead you're bored by cookie-cutter solutions you can even begin to find on StackOverflow. I've traced problems on a mobile web client down to processor implementations. I refer to real problems as greenfield, most likely inaccurately. I find design both consequently and paradoxically crucial and irrelevant to success. You can have the prettiest looking face with no neck, working body and arms or everything working just fine but damn ugly to look at and you'll steer clear. It just might not be what interests you.
It's finding where you fit into the puzzle that matters most - this career is scientific, relational, and calculated, as much as it is the most creative art form on the planet. They say development is one of the most free art forms one can hope to master. Go figure.
--ramble, ramble, ramble
I recommend the book "Working Effectively with Legacy Code".
Write an interface layer then use modern language and tooling against the interface.
Does the codebase have any unit tests (or similar)? If so, read 'em and understand 'em. Tweak their inputs so that the tests fail/pass. Will help you understand what each component in the codebase is doing.
If there are no tests, try to add one or two to cover the functionality you have figured out. When you get given a maintenance task, write a unit test before doing the work. The test will fail when you've finished writing it, and then your job will be to implement the change(s) that make the test pass.
When finished with the maintenance task, your unit test acts as a free regression test for your feature, so you know that your change works as intended and is somewhat future-proof.
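The test-first maintenance flow above, in miniature (Python sketch; the function and the "trim whitespace" task are illustrative assumptions, not from the post):

```python
# Suppose the maintenance task is "normalize usernames: lowercase and
# trim whitespace". normalize_username is a hypothetical legacy function.

def normalize_username(raw):
    # The .strip() is the change being made; before it, the test below
    # fails, which is exactly the point of writing the test first.
    return raw.lower().strip()

# 1. Write the test first; against the old code it fails.
# 2. Implement the change until it passes.
# 3. Keep it: it's now a free regression test for this behavior.
assert normalize_username("  Alice ") == "alice"
```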
Edit: Also, I have found this book quite useful in the past.
this book may help: http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052.
Also, put everything into functions and then unit test all of it (with mocks) using Pester.
Read the first review of this, then read the book.
I can't believe nobody's listed <em>Growing Object-Oriented Software, Guided by Tests</em>, by Freeman and Pryce. It's the best software-testing book I've read in decades, plural.
You should also have read:
Those should get you started.
As far as the nitty-gritty RSpec books everybody's been recommending, I'd suggest not limiting yourself to RSpec. After putting a few years into RSpec, I just rewrote the test suite for a small test project of mine in MiniTest::Spec, and the first tyro-level suite ran in half the time. Fast tests in Rails is a justifiably widely-addressed topic but, if you look closely at most of those links, you'll see they address speeding up RSpec.
Everything's a trade-off. Magic is neat and fun when it works, but it can really slow you down.
The conversation is drifting, but the problem you are describing can be vastly eased when using proper methodology.
Downvotes for a book that explains in a very thorough way how to write better code. Thanks.
Luckily I've got more karma to burn. Here's another excellent book. It's about how to turn bad code into better code.
No, TDD is not going to help you with legacy code! I'm not saying debuggers are not good tools to have, just that TDD and SOLID can reduce your dependency on them over time, and it's surprising how much time you can make up by not using debuggers.
As for testing as a whole, I try to keep it achievable in as controlled a way as possible, isolating things as much as I can. As an ASP.NET developer, that has been a nightmare until recently!
I'm no coding ninja; anyone who thinks they are is a cock. I've left my fair share of very bad legacy code... I'm still waiting for some of mine to turn up on DWTF! :D
Have you read Working Effectively with Legacy Code? Highly recommend it. There is a flow chart that gives you a good idea of how to chain refactorings together, but I can't find it just now. Will post it if I can find it.
EDIT: Not sure why my original comment is being downvoted... That's my experience and your mileage may vary; doesn't mean I'm wrong :(
Edit 2: My google-fu is strong today: http://timhigh.wordpress.com/2008/08/27/legacy-patterns-decision-tree/
If you are trying to get an old code base into shape then buy this book.
I don't think there are new editions, but the ideas in it will never go out of date.