In January 2020 I took part in Udacity's Full Stack Web Developer Nanodegree program. The course takes four months to complete and offers a deep insight into the world of full-stack web development. I'm going to introduce my final project: a database-backed web API written in Python.
The course is structured in four big topics.
- SQL and Data Modeling for the Web
- API Development and Documentation
- Identity and Access Management
- Server Deployment, Containerization and Testing
After each topic you have to complete a coding project and upload it to GitHub, where it is evaluated by Udacity reviewers. If you want to take a closer look at the program, follow this link.
In this article I introduce my final capstone project. It is called the Boulderlibrary API and is meant to become a large data pool of climbing gyms worldwide. With this repository I graduated as a Full-Stack Developer at Udacity.
Can't wait to see some code? Feel free to visit the project directly on GitHub.
The Boulder Library API
As a passionate climber, I wanted to create something related to my hobby. So I decided to develop an API that lists climbing gyms in different countries and lets climbers follow their favourite gyms.
I used the following technologies to create the app:
- Flask
- SQLAlchemy ORM
- PostgreSQL
The Boulderlibrary API can be described as a simple CRUD app: administrators can create, update, and delete climbing gyms, while users can read the data and add gyms to their favourites.
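To make the CRUD surface concrete, here is a minimal Flask sketch of what such endpoints could look like. The routes, the in-memory store standing in for the database, and the payload shape are my own assumptions for illustration, not the project's actual API, and the role checks (admin vs. user) are omitted.

```python
# Hypothetical sketch of a CRUD API for climbing gyms.
# The real Boulderlibrary API uses a Postgres database and
# restricts create/update/delete to administrators.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

# In-memory stand-in for the database table of gyms.
gyms = {1: {"id": 1, "name": "Example Boulder Hall", "city": "Munich"}}

@app.route("/gyms", methods=["GET"])
def list_gyms():
    # Anyone may read the list of gyms.
    return jsonify(list(gyms.values()))

@app.route("/gyms", methods=["POST"])
def create_gym():
    # In the real app this would require admin permissions.
    body = request.get_json()
    new_id = max(gyms, default=0) + 1
    gyms[new_id] = {"id": new_id, **body}
    return jsonify(gyms[new_id]), 201

@app.route("/gyms/<int:gym_id>", methods=["DELETE"])
def delete_gym(gym_id):
    if gym_id not in gyms:
        abort(404)
    del gyms[gym_id]
    return jsonify({"deleted": gym_id})
```

A "favourites" feature would add similar routes scoped to the authenticated user, e.g. `POST /users/<id>/favourites`.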
For the server development I used Flask, a popular framework for building microservices and small web servers. The app interacts with a PostgreSQL database. SQLAlchemy ORM helped me with the data modelling, as it avoids writing raw SQL: instead, you access the tables through Python classes.
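The ORM idea can be sketched as follows. The `Gym` model and its columns are illustrative guesses, not the project's real schema, and SQLite stands in for Postgres so the snippet is self-contained:

```python
# Hypothetical SQLAlchemy model for a climbing gym -- the real project's
# table layout may differ.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Gym(Base):
    __tablename__ = "gyms"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    city = Column(String)
    country = Column(String)

# The real app would point this at Postgres, e.g.
# create_engine("postgresql://user:pass@localhost/boulderlibrary").
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Gym(name="Example Boulder Hall",
                    city="Munich", country="Germany"))
    session.commit()
    # Querying happens through Python objects instead of raw SQL.
    german_gyms = session.query(Gym).filter_by(country="Germany").all()
```

The payoff is that inserts and queries are plain Python method calls, so the same model code works against SQLite in tests and Postgres in production.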
A future task will be to develop a frontend for the API; I'm considering a React app for the user interface. I also have to find a way to gather all the data, as the database currently contains only mock data. Maybe a scraper can collect the relevant information. We will see.