This is really important later. Also, I showed you this in the slides before. I also like to combine spec-driven development with behavior-driven and test-driven development. When you use AI agents with an existing function, for example, and tell them, "please write me unit tests," they will normally write always-green tests. Those tests are written to fit your existing code, so they will never tell you that there is a bug inside it. This is why test-driven development works really well when it comes to AI-generated code. So, in this constitution, I tell the agents: when you implement a task later, write the test first and implement after that. You write a solid test, the agent generates code, the test gets executed, and then you can see whether the test passes or not. And it's funny, because I'm using spec-driven development a lot in my projects, you can see that the implementation often fails on the first try. So it's a really good pattern: use test-driven development in your constitution.
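The test-first flow described here can be sketched in a few lines of Python. This is only an illustration, not code from the talk: `slugify` is a hypothetical function an agent might be asked to implement, and the test is written before any implementation exists.

```python
import re

# Step 1: the test is written FIRST, against behavior we want,
# not against code that already exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces  ") == "spaces"
    assert slugify("Über-cool!") == "ber-cool"  # non-ASCII characters are dropped

# Step 2: only now does the agent generate an implementation.
# If it is wrong, the pre-written test fails on the first try.
def slugify(text: str) -> str:
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumerics to "-"
    return text.strip("-")
```

Because the test encodes the intended behavior independently of the generated code, it can actually catch a buggy first attempt, which a test written after the fact (fitted to the existing code) never would.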
And, of course, this is really condensed down. There are more rules you can add to this file, depending on your preferences. And you can always ask your favorite LLM to improve your constitution. After that, it's finally time to start with the first phase, the specification. This is the only file you really write by hand, because it's the foundation of spec-driven development; this is where the name comes from, the specification. And the specification is about the what and the why, not about the how. It defines what you want to achieve and why, but not with which technology. So you don't write something like "I want to build a backend using Next.js, and the frontend should also use Next.js." Instead, you write what you want to achieve and why you want to achieve it. And then, as you can see here, some outputs are a bit flaky. Copilot, in this case, created five user stories but named them P1, P1, P2, P2, and P3. The output is just broken sometimes, because this is how LLMs work.
But in this case, Copilot created five user scenarios based on my specification for an authentication backend. We can also see that this specification is broken down into functional requirements, entities, success criteria, and, really importantly, edge cases. You should always think about edge cases. With authentication, for example: how should the backend later handle a token that is no longer valid, or a refresh that failed? It's really important to think about edge cases. Then there's the planning phase.
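The two auth edge cases mentioned here, an expired token and a failed refresh, can be sketched like this. All names (`verify_token`, `get_valid_token`, `TokenExpiredError`, `RefreshFailedError`) are illustrative assumptions, not part of any real library or of the generated backend:

```python
import time

class TokenExpiredError(Exception):
    """Edge case 1: the access token is no longer valid."""

class RefreshFailedError(Exception):
    """Edge case 2: refreshing the token also failed."""

def verify_token(token: dict) -> dict:
    # Reject tokens whose expiry timestamp is in the past.
    if token["expires_at"] <= time.time():
        raise TokenExpiredError("access token expired")
    return token

def get_valid_token(token: dict, refresh) -> dict:
    # Try the access token first; on expiry, fall back to the refresh
    # callback. If the refresh itself fails, surface a clear error that
    # forces a re-login instead of retrying forever.
    try:
        return verify_token(token)
    except TokenExpiredError:
        new_token = refresh(token)
        if new_token is None:
            raise RefreshFailedError("refresh failed; user must log in again")
        return new_token
```

Spelling these cases out in the specification means the later implementation and its tests have to cover them explicitly, instead of only covering the happy path.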