Here at Terathink, we are working with a large government agency to construct a content services platform. The platform allows content generated by benefit applications to be shared and reused across the organization’s disparate IT applications through application programming interfaces, or APIs.
Our agile development team manages our work using a Kanban approach, from requirements gathering to the deployment of the API to a production environment. We have honed our use of Kanban to most effectively manage the work required to take a user request to functional reality.
Document Requirements using Spikes
To start our development process, we create spikes to account for requirements-gathering work. Spikes help determine the level of effort needed to deliver the needed functionality. The outcome of each spike is one or more full stack user stories, which account for the work to actually build and deliver a minimum viable product (MVP) to the client.
In our domain of content management, we have developed a requirements-gathering template that we rely on to scope our work. Some of the questions we ask from the template are:
- What conceptual business objects are core to the operation of your application?
- Do you currently have a content/business object model? If so, what are the entities and attributes within that model?
- What content is created at which part of the business process, and why?
- What content is retrieved at which part of the business process, and why?
- Which users/roles are permitted to update/delete which types of content?
- Are there changes in content state and status values within your business process?
The answers to these and other questions help us identify and document the user stories. These stories capture how the client’s requirements align to our current content model, whether the new requirements imply any changes to that model, and which API methods (e.g., POST, PUT, GET, DELETE) we need to provide the client to meet their content needs.
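As a sketch of this mapping, a single gathered requirement often implies several method-level user stories. The object names and paths below are hypothetical illustrations, not our actual content model:

```python
# Hypothetical mapping from one gathered requirement to the API
# methods and user stories it implies; names are illustrative only.
requirement = "Case workers create, retrieve, and update benefit applications"

implied_stories = [
    {"method": "POST", "path": "/applications", "story": "Create a benefit application"},
    {"method": "GET", "path": "/applications/{id}", "story": "Retrieve an application by ID"},
    {"method": "PUT", "path": "/applications/{id}", "story": "Update an existing application"},
]

for s in implied_stories:
    print(f"{s['method']:<6} {s['path']:<22} -> {s['story']}")
```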
Modify The Content Model
Another outcome of our requirements research spikes is a follow-on research spike to update our content model to account for the client’s requirements. As our team has been tasked with enterprise-wide content management, we are always managing the complexities of having a robust content model that can accommodate specific and unique business needs while still meeting the needs of the entire organization. At this stage of our process, the main questions we need to answer are:
- What are the key parent business object(s)?
- How do you search and retrieve those objects?
- What content belongs to each object?
- Which business rules and behaviors apply to the object and documents?
These answers drive any client-specific alterations to our content model. This, in turn, undergirds the functional user stories that will drive API development.
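To make the questions above concrete, here is a minimal sketch, under assumed names, of what answering them might produce: a hypothetical parent business object, the content it owns, and one business rule on its status values.

```python
from dataclasses import dataclass, field
from typing import List

# Allowed status values for the parent object -- a sample business rule;
# the actual states in any real model would come from the client.
VALID_STATUSES = {"draft", "submitted", "approved", "archived"}

@dataclass
class ContentItem:
    """A piece of content belonging to a parent business object."""
    item_id: str
    mime_type: str

@dataclass
class BenefitCase:
    """Hypothetical key parent business object."""
    case_id: str                                  # used for search and retrieval
    status: str = "draft"
    contents: List[ContentItem] = field(default_factory=list)

    def transition(self, new_status: str) -> None:
        # Business rule: only permit known status values.
        if new_status not in VALID_STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.status = new_status
```

A model captured this way gives the functional user stories something precise to reference when API behavior is specified.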
Create Client-Facing API Specification
With the content model updated, we create the client-facing API specification and publish it to Swagger. Posting the specification to Swagger before coding the API has the following benefits:
- Development and Testing Teams: The spec conveys the intent of the API and its attributes. Testers can use it to develop test scenarios in concert with the development team and the analyst, following Test Driven Development (TDD) principles.
- Client Team: The spec helps the client understand the API and its capabilities. The client-facing API serves as a contract between the client and the development team: when the client calls this API, they should expect the outlined behavior.
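As a hedged illustration of what gets published, here is a minimal OpenAPI 3.0 fragment for one hypothetical client-facing endpoint, built as a Python dict and emitted as JSON (a real spec would typically be authored directly in YAML or JSON in the Swagger editor):

```python
import json

# Minimal, hypothetical OpenAPI 3.0 fragment; the path, title, and
# responses are illustrative, not an actual Terathink endpoint.
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Content Services API (sketch)", "version": "0.1.0"},
    "paths": {
        "/documents/{id}": {
            "get": {
                "summary": "Retrieve a document's metadata by ID",
                "parameters": [{
                    "name": "id",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {
                    "200": {"description": "Document metadata returned"},
                    "404": {"description": "No document with that ID"},
                },
            }
        }
    },
}

print(json.dumps(spec, indent=2))
```

Publishing even a fragment like this early lets testers and the client react to the contract before any implementation exists.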
Define System APIs supporting the client API
Next, we create a story for our content management developer to create a series of internal or system APIs. Again, this uses the Swagger interface. We do not expose the system APIs to the client for use. Rather, our development team uses them to build the business-tier microservices.
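The layering can be sketched as follows. This is a simplified, hypothetical illustration of the idea that the client-facing API composes internal system APIs rather than touching the repository directly; the function names are assumptions, not our actual interfaces:

```python
# Hypothetical sketch: internal "system" APIs are not exposed to the
# client; the business tier composes them behind the client-facing API.

def system_fetch_metadata(doc_id: str) -> dict:
    """System API: metadata lookup. In reality this would query the
    content repository; here it returns a canned record."""
    return {"id": doc_id, "status": "approved"}

def system_fetch_content(doc_id: str) -> bytes:
    """System API: raw content retrieval (stubbed)."""
    return b"...document bytes..."

def client_get_document(doc_id: str) -> dict:
    """Client-facing API: composes the system APIs into one response."""
    meta = system_fetch_metadata(doc_id)
    meta["size_bytes"] = len(system_fetch_content(doc_id))
    return meta
```

Keeping the system APIs private this way means the repository can change without breaking the client-facing contract.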
Create And Execute Full Stack User Story To Code And Test API
After the team gathers client requirements, updates our content model, builds and publishes the client-facing API spec to Swagger, and defines our system APIs, we are ready to build out the API. Once built, the client can test the API on our public Swagger instance with live data and validate that the responses received from the API are as expected.
Our full-stack user story takes a layer cake approach. It will generally include subtasks to accomplish the following:
- Coding needed to allow our content management application to expose our content repository to our enterprise service bus (ESB) through RESTful APIs:
- Code webscript framework for the applicable API;
- Code webscript handler for the applicable API; and
- Deploy and test API handlers
- Review of design artifacts (i.e., the client-facing API specification) by our enterprise service bus/integration framework developer, with follow-on subtasks to build ESB logic to implement and unit test any business rules implied by the public-facing API spec.
- Build test scenarios (defined before the start of development, in accord with TDD principles as mentioned above). Then execute them for “happy path” functionality as well as error handling/validation scenarios.
- Test the API from end-to-end (i.e., submit a request to POST/PUT/GET/etc. content via the public-facing API and evaluate the response as that request travels through our ESB layer and content management application, and back to the user).
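The test-scenario subtasks above might look like the following minimal sketch: one “happy path” case and one error-handling case, written before development per TDD. `get_document_e2e` is a hypothetical stand-in for the real HTTP round trip through the ESB:

```python
# Hypothetical TDD-style scenarios; get_document_e2e stubs the real
# end-to-end call (public API -> ESB -> content management app -> back).

def get_document_e2e(doc_id: str) -> dict:
    """Stand-in for an HTTP GET against the public-facing API; a real
    scenario would assert on the live response instead."""
    if not doc_id:
        return {"status": 400, "error": "id is required"}
    return {"status": 200, "id": doc_id}

def test_happy_path() -> None:
    resp = get_document_e2e("A-123")
    assert resp["status"] == 200 and resp["id"] == "A-123"

def test_missing_id_is_rejected() -> None:
    resp = get_document_e2e("")
    assert resp["status"] == 400 and "error" in resp

test_happy_path()
test_missing_id_is_rejected()
```

Because these scenarios exist before coding starts, failing them during development is expected; they pass only once the API honors its published contract.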
As the team completes the subtasks on the user story, we reassign the story to the team member with the preponderance of the remaining work.
Testing Is Critical
When the team finds issues during testing, they determine whether the issue(s) can be resolved quickly. On our project, “quickly” generally means no more than one day of development and re-testing. We also determine whether the issue, if left unfixed, will prevent end users from using the API as intended.
If the issue can be resolved quickly, the developers resolve it and return the story to our tester. If the issue cannot be resolved quickly but does not have a material impact on the API’s core functionality, the tester creates a bug and places it in the backlog; a later team backlog grooming session prioritizes these issues. Regardless of the level of effort (LOE) to fix the issue, if we determine that it does in fact have a material impact on the API’s core functionality, we ensure that the issue is resolved within the user story before moving the code to our integration and user acceptance testing (UAT) environment.
Assuming a successful deployment to UAT, we demonstrate the new API functionality to our client and then release the API “into the wild” for them to test against. The team evaluates any issues reported by users and, if need be, creates new bugs in Jira to resolve them. After successful UAT testing and resolution of any major issues, the API moves to Production in our next release.
Driving Towards Success
Developing APIs in an agile environment can be a challenge. However, it is our experience that using Kanban (as opposed to scrum) mitigates some of the risks that arise from serving a range of users within the same organization. From the first research spikes to the completion of the full stack story, the entire team stays engaged. Additionally, the team is able to respond quickly to client-reported issues regarding functionality.
Developing the client-facing API spec first (aka the “API First” approach) ensures that the development team is driving towards a client-defined requirement. It has served us well on our efforts to deliver effective content services.