You are probably familiar with the term “Web3”, which many refer to as the next stage of the internet. Around the tech industry – and in many others – exploring new alternatives is always tempting. With something as exciting as Web3, it is almost like Garfield trying to keep his paws away from freshly baked lasagna. It simply seems impossible.
All hype or actually useful?
It is essential to ask whether this new, shiny tech solves a problem for you or if you are making your business model more complicated. While today’s hype can easily be the future of tech tomorrow, your business might be better off sticking to the traditional ways of the internet. Therefore, we always encourage our peers to look at Web3, blockchain, etc., as a platform to solve customer needs rather than pinning it as a solution before fully getting to the core of the problem.
Our recommendation will always be to identify the need based on user insights and then look at the tech and platform solution.
The idea behind Web3 is that it is based on decentralized peer-to-peer networks running on the blockchain, where applications are distributed across the computers participating in a specific network. A significant amount of the spotlight comes from the spheres of crypto, NFTs, and the metaverse, consistently bringing headlines for all the right – and the wrong – reasons. However, by looking at blockchain technology and what it can do, we begin to uncover real-world problems where blockchain can solve actual needs.
We have done this with Kollektiv, an endurance sports training platform leveraging deep-tech to help world-class coaches create personalized training plans for athletes like you and me.
Providing total ownership to personal trainers
When we first began talking to Kollektiv, they came to us with a revolutionary vision to democratize personal training and make excellent coaching available for everyone. They had identified a need to break free of traditional personal training platforms such as social media and instead allow trainers to form their own unique communities – ones where pro athletes’ followers could benefit directly and exclusively. By doing so, the pro athletes act as coaches and own their community. In this case, blockchain turned out to be the obvious choice of technology.
Leveraging blockchain to challenge real-world problems
We settled on the tech stack during a discovery workshop in Copenhagen, bringing everyone together for three days of problem identification and solution exploration.
Fundamentally, we wanted to dive into how to solve three core needs.
Empower personal trainers and provide 100% ownership of their community
Sustain the incentive to keep athletes on their path
Ensure transparency by enabling proof of pro athletes’ coaching abilities
Kollektiv’s Discovery Workshop @Lab08’s office in Copenhagen
As mentioned, we discovered that the blockchain would provide us with the right “out-of-the-box” technology to meet all our requirements. For Kollektiv, blockchain and Web3 went from hype to a problem-solving platform.
To empower personal trainers and transition pro athletes into coaches, we leverage a Decentralized Autonomous Organization (DAO) to help them form their own communities. The DAO gives them complete ownership and enables them to build and manage their communities in a democratic and censorship-free fashion.
Athletes are granted a utility token when they join the network. The token enables several functionalities beyond simply being a ticket to play: it confers voting rights in the DAO network and lets athletes purchase community-related offers and services. Additionally, we have implemented a new way of motivating clients by establishing a “train-to-earn” token as an incremental payment system. The setup allows users to earn money back as they progress through their training plan: the more you train, the more you earn back on your initial cash commitment.
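The earn-back mechanic above can be sketched as a small function. This is purely illustrative: the function name, the linear schedule, and the 50% cap are our own assumptions for the sketch, not Kollektiv’s actual token logic.

```typescript
// Hypothetical sketch of a linear "train-to-earn" payout: the athlete commits
// an amount up front and earns a share of it back per completed session.
function earnedBack(
  commitment: number,        // initial cash commitment, in tokens
  completedSessions: number,
  totalSessions: number,
  maxEarnBackRate = 0.5,     // assumption: at most 50% can be earned back
): number {
  // progress is capped at 100% so extra sessions don't overpay
  const progress = Math.min(completedSessions / totalSessions, 1);
  return commitment * maxEarnBackRate * progress;
}

// An athlete who committed 200 tokens and finished 15 of 30 sessions:
console.log(earnedBack(200, 15, 30)); // 50
```

Any real schedule (tiered, streak-based, etc.) would slot in behind the same interface.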
To validate the coaching skills of pro athletes, the “train-to-earn” token’s transaction history works as a decentralized ledger recording past performance and experiences with the individual trainer. It serves as an anonymized and immutable database containing all athletes’ prior performances in the trainer’s unique community. This makes it possible for new potential athletes to make informed decisions based on past results, completion percentages, etc., which makes picking a personal trainer much less of a shot in the dark.
Continuing down the path with Web3
Together with Kollektiv, we will continue to explore, define, and validate the individual functions and capabilities in the Web3 solution and how these will interact. We will be focused on enabling the DAO communities to flourish and leveraging the “train-to-earn” token in the best possible way, e.g., adding further utility to the token by making it redeemable for physical goods and services relevant to athletes.
If you want to learn more about Web3, blockchain, or how we might be able to help you understand more about the topic, please reach out. We’re happy to discuss this further over a coffee to see how we can help address your needs.
Lachezar Blagoev is the Head of Product Management at Lab08. His responsibilities include defining product roadmaps, managing the backlog, and coordinating development efforts to ensure that milestones maximize the value we bring to all of our customers.
He acts as the link between customers and business by representing the user’s perspective.
Lachezar has made essential decisions regarding all aspects of product strategy, including but not limited to UX, technical approach, business purpose, and compliance with regulations.
Be sure to follow us on social media to receive updates about other similar content!
When setting up Lab08, our cornerstone was that we wanted it to be more than a traditional outsourcing business. Rather than simply developing products, we wanted to transfer knowledge on scaling innovative software ventures so our clients could reach the next level. After all, this was deeply embedded in our DNA as entrepreneurs.
If you are familiar with Lab08, you know that we are big believers in simplicity. We want to solve the problem causing the user pain. To do so, we always go through a thorough ideation and validation process before we eventually start building products. Think of it as paving the road before driving on it. We take this approach to avoid costly errors when building products – making sure that we have the right insights and a streamlined approach to product management. There is nothing worse than ending up with no fuel in your tank when you have arrived at the wrong destination.
In this piece, you can read more about our journey from ideation, where we let everything fly, to implementation, where we go in-depth with the best solution.
Working through ideas at pace
Once we pick up working on a new product, we go through the ideation phase, which is where we let it all out. The good, the bad, sometimes the ugly. This is an exciting phase, as we generate tonnes of ideas that we can filter, cut, and discuss. Ultimately, we slice away the sub-par ideas and move ahead with the ones that have substance. Fundamentally, this stage is about defining solutions that can solve the user’s basic needs.
To get this right, we conduct various qualitative and quantitative interviews with internal stakeholders. To ensure a controlled environment for ideation, we use several techniques that keep us from spiralling in a hundred different directions. Our first approach is called breadboarding, a method developed by Ryan Singer, which we use to identify all the essential components a solution requires. Then, we visualize the connections between all of them. Getting everything mapped out allows our team to swiftly talk through the different aspects and decide whether they fulfil the user needs. With breadboarding, there are three basic elements we cover: places, affordances, and connection lines. Places are elements to navigate – like pages or menus. Affordances are user interface elements such as buttons or other actions. Lastly, connection lines indicate how users navigate from place to place using affordances.
Breadboarding example
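The three breadboarding elements can be modeled as plain data, which is handy when sketching flows in text rather than on a whiteboard. The names and the invoice/autopay flow below are our own illustrative choices, not part of any formal notation:

```typescript
// A minimal breadboard as data: places, the affordances attached to them,
// and connection lines describing how affordances lead to other places.
interface Place { name: string; affordances: string[] }
interface Connection { from: string; via: string; to: string }

const places: Place[] = [
  { name: "Invoice", affordances: ["Turn on autopay"] },
  { name: "Setup Autopay", affordances: ["Confirm"] },
];

const connections: Connection[] = [
  { from: "Invoice", via: "Turn on autopay", to: "Setup Autopay" },
];

// Walking the connection lines lets the team talk through a flow quickly:
for (const c of connections) {
  console.log(`${c.from} --[${c.via}]--> ${c.to}`);
}
```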
However, sometimes mapping out an idea requires a visual aspect. Enter good old fat marker sketching, a technique often used here at Lab08. We wish this were merely a snappy name for a fancy method, but it is precisely what it sounds like. We find the freshest Sharpie in the office and start illustrating big-picture concepts on paper. Our approach with fat marker sketching is to agree on a direction before going into the nitty-gritty. Such a sketch provides just enough context and detail for a designer to move forward with it afterwards. Plus, we always feel a notch more important when we draw on the big billboard in front of our colleagues.
Fat marker example
As you may remember, we are all about products that address basic needs, and we use these techniques to help us identify the best possible solution to run with. Simple visualizations and mapped-out products give us the information we need on which features and functions are easy to implement. Additionally, we can identify how to scale when a more comprehensive solution is required – one that includes performance needs and a few delighters.
Once we are happy with our idea short-list, we work to validate the ideas as real solutions. The first step is to offer prototypes or mock-ups to end-users with whom we have close collaboration and trust. The next step is essentially a lab study, where our Product Owner observes how our test pilots engage with the prototype. There is a big chunk of touch and go, trial and error here, as the PO proactively searches for ways to improve the product solution.
Our objective is to address the critical pain points and basic needs before developing the actual product. By monitoring how people work with our solutions, we gain first-hand insights into what works and what needs improving before we start implementing. It is our take on a qualitative study, and it gives us tonnes of valuable data.
Once a specific solution is deemed solid and successfully fulfils a user’s need, we move forward to defining the minimum viable product, the MVP.
Securing smooth implementation
When we implement solutions at Lab08, we want to simplify and break them down as much as possible. Therefore, we always highlight our user stories and technical objectives that must be fulfilled to generate real-life user value.
We do this to make it easy for the development team to understand who the key user of the product is and what the success factors are, while ensuring that everyone is on the same page. This lets us navigate quickly among the needs we are looking to solve.
In the early implementation phase, we make sure that all success criteria are detailed and scripted so our Quality Assurance team can complete thorough tests as we move along. Additionally, we start grooming our backlogs. We do this because we will have numerous tasks and user stories sitting in the product backlog when we start the build, and we need to ensure everyone is on the same page with every task before the individual sprints kick off. This provides a homogenous understanding of the different tasks in front of us.
Once the actual development begins, we work with different approaches to meet our desired outcome. We typically work with Scrum on specific tasks set to meet the pre-defined user stories and needs. Here, we work in sprints with a designated time for each job, strictly focusing on our end goal. When our set-up is more free-flowing with numerous moving parts, like when we build MVPs, we take the Kanban approach, as it allows for more flexibility and lets us follow the progress on the board.
From here, the Product Owner takes on the final stage of the actual software development: User Acceptance Testing (UAT). The PO gives end-users the opportunity for real-world testing, to see how our product solution handles the needs and tasks it was designed to address. In this final stage, the PO observes how users navigate the products or new functionalities while relaying feedback to the product team.
If all checks out, we go live.
Evaluating for future improvements
Once a product is live, our work does not stop. We continuously follow and track our solutions for any future enhancements, developments, or features that can add value to the user experience.
We do this because we want to stay focused and agile in challenging our clients’ pain points and meeting their needs. This is the Product Owner’s job, as they need to keep this in scope once the first product has been delivered.
If you enjoyed learning more about the Lab08 process from idea to implemented product, make sure you also check out how it all starts – with a product discovery team. You can also keep an eye out for future articles. We will drop small bits and pieces on our approach to product development. Maybe we touch upon just the right topic that piques your interest?
When I started at Lab08 as a Technical Lead, my first project was a greenfield project – every programmer’s dream. No cleaning up other people’s code, no deprecated and forgotten technologies. A dream come true for everyone, but especially for a guy who had spent the previous two years fixing legacy code.
The tech
So, long story short, we gathered the team and discussed what technologies we would use and need for such a project. After some internal brainstorming, we ended up going with:
NX – for managing a monorepo
Angular – for all the front-end apps
NestJS – for the API
MySQL – for databases
MongoDB – for event logging, messages, notifications, and more
Redis – for caching and as a message broker for queues
The teams here at Lab08 are pretty fond of NestJS. It is a simple, Express-based, robust, and extendable NodeJS framework. It also lets you prototype things fast, has a growing community, and offers built-in modules or third-party libraries for most needs, such as integrations with Redis, different types of ORMs, MongoDB, WebSockets, and more.
Another good feature of this framework is its reliance on modules and dependency injection containers, as it is heavily inspired by Angular. This makes code separation and general separation of concerns relatively straightforward.
So, after choosing the tech, the next step was… *drum roll*
The infrastructure
The obvious next step was to set up the infrastructure.
Here at Lab08, we use AWS for almost all products we build with our clients (with a few exceptions), so naturally, we set out to use S3 buckets to store all static files like images, PDFs, zip files, etc.
So, now we arrive at the boring part.
The nightmare
Did you know that NodeJS has a lot of libraries – like, an insane amount? You can find packages for almost anything, but then comes the research. Is the library actively supported? Is it stable? What version of NodeJS does it support? Are there any vulnerabilities? Are there a lot of open issues? When was the last update?
Here, we came across a problem. Since NestJS is a somewhat new framework, there are actively maintained packages only for the most common needs, like ORMs, WebSockets, serialization, emails, caching, etc.
However, when you need something more specific, there is a high chance that you will not find it easily.
The search began. After a few hours of digging, we found a couple of NestJS packages and started exploring them. Unfortunately, we quickly found they were not well written. They were limited and not actively maintained (worked on an older version of NestJS, used the old AWS SDKs, or had a lot of open issues from months ago).
After researching, we found that no package fit our needs, so we decided to use the AWS NodeJS SDK v3 and create a simple wrapper to call it as a service. We started with a few simple methods that wrapped the AWS SDK and allowed it to swim with the rest of our code – all seemed well at this point. Magnificent.
One bite at a time
One day, I was having lunch with a Lab08 Tech Lead from another team, who asked me about the implementation of certain aspects of our system. One of his questions was about the connection with the S3 buckets.
When he had started on another project at Lab08, the active AWS package was still v2, and he wanted to migrate to the newer major version. As such, he wanted to know whether there were a lot of breaking changes. After a brief chat on the topic, we headed back to the office and discussed how our project’s module was implemented, what problems we had encountered, what could be improved, etc.
As with all pieces of code, bugs are inevitable (disguised as unexpected features if they pop up while you are doing a demo, you know). Sometimes, updates lead to breaking changes that must be addressed. Since most Lab08 projects share the same infrastructure, every team was spending time maintaining its own implementation of the same module, which is required for every NestJS-based project. Another essential thing to note is that the amount of time spent supporting this would only increase, as new projects would have the same dependencies – so something had to be done.
Open source, baby
After identifying a critical cross-dependency between projects, we decided to make our lives easier and create an official library for NestJS. As we all know, most of us programmers are pretty lazy, so we want to be able to contribute and expand a common library instead of maintaining something separately. After a quick discussion on our fundamental needs, we made a repo and paused the music.
On the first day, there was the repo
The first thing we created was the repo, and we gave it a catchy name:
We decided to simply name it nestjs-s3. Straightforward but classy. Now it was time to set everything up. I had forgotten how boring it is to set up linters, git hooks, and formatters, and to write install instructions. Anyway, I thought: ‘No worries,’ as I knew the fun part would come when we started writing the code.
On the second day, there was the version crisis
Writing an open-source package for a popular framework is cool, but there can always be hiccups. In this case, while discussing how the module should be implemented, NestJS released its newest version, 8, which came with some minor breaking changes.
Oh well, it happens, right? After a quick discussion with the team, we decided to target the latest version of the framework and release the package as version 2.0.0. Later, we created a separate branch for version 1.0.0 to support versions 6 and 7 of the framework. Crisis averted!
On the third day, there was the module
The fun part began as we worked to make the module abstract and decide what to configure and how. What we ended up with was a pretty simple configuration. The module can be set up in two ways – the so-called static way, or asynchronously using promises. It looks something like this:
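A registration along these lines is what we ended up with. Treat the exact option names here as illustrative assumptions rather than the library’s confirmed API – this is a sketch of the typical NestJS `forRoot`/`forRootAsync` pattern:

```typescript
import { Module } from '@nestjs/common';
import { S3Module } from '@lab08/nestjs-s3';

@Module({
  imports: [
    // Static registration: options known up front
    S3Module.forRoot({
      region: 'eu-central-1',
      credentials: {
        accessKeyId: 'your-access-key-id',       // placeholder
        secretAccessKey: 'your-secret-access-key', // placeholder
      },
    }),
    // Or the async variant, resolving the options from a promise/factory:
    // S3Module.forRootAsync({
    //   useFactory: async () => ({ region: 'eu-central-1' }),
    // }),
  ],
})
export class AppModule {}
```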
We now had a way to initialize our module and have an instance of the AWS S3 client in our dependency injection container.
On the fourth day, there were the buckets
You might know that S3 operates by storing objects in buckets (for those not familiar with S3, you can think of a bucket as a folder), so that was what we were targeting next.
Here came the first significant brainstorming in the team. What should we cover? Should we go for full coverage, or go slowly as needed and not overcomplicate the service? After some discussion (and some beer), we went with the more minimal approach and implemented only the most-used commands for creating, listing, tagging, and updating some configurations. (You can see the complete list here: https://labo8.github.io/nestjs-s3/api/classes/BucketsService and https://labo8.github.io/nestjs-s3/buckets-service)
After we patted ourselves on the back, we had a cool new service for managing buckets – one we could simply inject anywhere we needed:
import { Injectable } from '@nestjs/common';
import { BucketsService } from '@lab08/nestjs-s3';

@Injectable()
export class MyService {
  public constructor(private readonly bucketsService: BucketsService) {}
}
After finishing all our services, we reached the fun part – seeing our creation in action.
We created a simple console app that lets us manage buckets and upload files directly to AWS. Since we had covered everything with tests, it all worked like a charm, and you can find it here:
On the sixth day, our worst nightmare – documentation
All was good and fun until we had to write the documentation – and boy, oh boy, do we programmers ‘love’ writing documentation. After some time discussing the need (or lack thereof) for documentation, we created an instance of Docusaurus and started writing.
After a long and tiresome week of coding, designing, discussions, and dreadful documentation, our baby was ready to be shown to the world. With fingers crossed, we wrote the magic words git push origin master, and our idea became a reality.
You can almost hear the champagne popping as you read this, right?
Martin Andreev is a Technical Lead at Lab08. He handles the technical design and development of complex software solutions, coaching and supporting developers, and conducting technical interviews.
*This article was written by a former DevOps Architect at Lab08 – Atanas Dimitrov*
Previously, you may have caught our approach to product management at Lab08, digging into discovery teams, covering needs, and MVP implementation.
Here, we shift the focus to shed some light on our DevOps stack – more precisely, how we enable scalable, highly available infrastructures for the products we build. In this article, you will find some of the tips and tricks we use in our setups, as they enable us to deliver optimal experiences for our software users.
At Lab08, we cover the entire spectrum of product development – from product management and architecture decisions, through coding and setting up infrastructure, to deployment and monitoring. From a technical point of view, such projects are primarily web-based platforms, so our focus in this piece will be on handling the request/response pathway from end-consumer devices – such as a browser on a laptop or a mobile app – to the application and back. When we work with clients, we consistently set them up on AWS’s public cloud, as we feel it provides the soundest foundation and infrastructure alternative available.
In the illustration above, you can see a simplified design of the AWS infrastructure, which consists of three main components. 1) AWS’s distributed network, 2) AWS Network Load Balancer, and 3) Nginx or Openresty on an Autoscale group of EC2 instances.
AWS distributed network
At Lab08, we want to ensure that we provide our customers with sustainable infrastructures that strike the right balance between solid performance and reasonable cost. This is why, in our AWS setup, depending on the specific needs, we use either AWS Global Accelerator or Cloudfront to deliver fast and secure products, connecting the web platform and end-users in the best way possible. Cloudfront is used when accelerating SPA (single-page application) front-ends hosted in S3, accelerating APIs, applying Cloudfront Functions at the edge, enabling web application firewalls at the edge, IP limits, or content security headers, or when leveraging DDoS protection. Conversely, Global Accelerator is preferred when a static IP is required or when the traffic is not HTTP/S.
Beyond the obvious applications of Cloudfront, we use it for a few specific purposes:
Routing
We apply the AWS Network Load Balancer (NLB) over the Application Load Balancer (ALB) for reasons that will be clarified a bit later. This type of load balancer, being Layer 4, is missing a couple of features found in the ALB – for example, Layer 7 routing based on hostname/URI path. By adding Cloudfront in front of the NLB, we cover the routing at the edge by defining different Cloudfront behaviors targeting different origins. We also apply some complex routing using Cloudfront Functions and Lambda@Edge when needed.
Origin Protection
To prevent anyone from bypassing Cloudfront and hitting the Origin directly, we use the “Custom header” option in the Cloudfront Origin section. Cloudfront allows us to inject a custom header into every request towards the Origin, allowing the Origin to apply policies based on this header. For example, we set “my-custom-header=12345”, and at the autoscale group of nginx/openresty nodes behind the NLB, we apply a simple nginx map and block: if a request doesn’t contain this header with the previously set value, the traffic is blocked.
# $is_bypassing_cloudfront is set to 0 only when the header my-custom-header is found and equals 12345
map $http_my_custom_header $is_bypassing_cloudfront {
    default 1;
    12345   0;
}

# Then, in the server or location nginx section:
if ($is_bypassing_cloudfront) {
    return 444;
}
An important note on the nginx/openresty config: the header name is defined with “-” in Cloudfront’s option, but it must be referenced with “_” in nginx – i.e., “my_custom_header”, not “my-custom-header” – because nginx exposes request headers as $http_ variables with dashes converted to underscores.
444 is not a standard response code; it is an nginx-specific code that instructs the server to close the connection without sending any response, which is excellent when dealing with malicious traffic.
AWS Network Load Balancing
At Lab08, we have steered away from ALBs in favor of the AWS NLB (Network Load Balancer). The reason is that the performance benefits of NLBs outweigh the alternatives. When massive traffic spikes are on the horizon, there is no need for “pre-warm/scale” actions. We work on projects where traffic comes quite unpredictably, which is where NLBs excel. Additionally, the static IPs of the NLBs are quite helpful. However, there are still some elements that the NLB is missing.
Layer 7 Routing
The NLB cannot route based on the Host header or the URI path. However, we can easily overcome this challenge by routing at a different stage – e.g., the Cloudfront layer. Since there are no Layer 7 capabilities for the different services or microservices, we simply use separate listeners on the same NLB. Each listener represents a separate microservice: it operates on a different port of the NLB, can use its own certificates, and has a different target group of autoscale nodes serving that service. Lastly, we can also route at the Origin’s nginx/openresty stage, on the Amazon EC2 instances behind the NLB.
There are no security groups,
so setting security access at the NLB is impossible. We have a few options to overcome this: either A) the Cloudfront layer’s AWS WAF (web application firewall) with IP sets, or B) the Origin’s nginx/openresty. For option B), we run a simple bash script as a cron job, using the AWS CLI to discover the IPs of the EC2 instances tagged with the correct key/value that need access to the local EC2’s service, and add those IPs to an nginx map. Every EC2 instance has the appropriate IAM (Identity and Access Management) policy to allow those queries.
Assume we want all instances with the tag “Role = microservice1” to reach our service through a public-subnet NLB. Using the script below, we can generate an openresty map setting the right access:
#!/usr/bin/env bash

# Define the EC2 tag used to gather the list of nodes we want to allow.
# Let's call the key "Role" and its value "microservice1".
ROLE=microservice1

# Pick the region from the instance metadata
REGION=$(curl -s http://instance-data/latest/meta-data/placement/availability-zone | sed s/.$//)

# Generate a temporary nginx geo map with the discovered IPs of all
# running instances with tag Role = microservice1
aws ec2 describe-instances --region ${REGION} --filter Name=tag:Role,Values=${ROLE} Name=instance-state-name,Values=running --query 'Reservations[*].Instances[*].[PublicIpAddress]' --output text | sed 's/$/ 0;/; 1s/^/geo $non_autoscale_net {\ndefault 1;\n/; $s/$/\n}/' > /tmp/non_autoscale_net.conf

# Check whether the list has changed and, if so, reload the proxy
if cmp -s /tmp/non_autoscale_net.conf /etc/openresty/conf.d/non_autoscale_net.conf
then
    exit 0
else
    cp /tmp/non_autoscale_net.conf /etc/openresty/conf.d/non_autoscale_net.conf
    /usr/bin/openresty -t && systemctl reload openresty.service
fi
This script generates an nginx/openresty geo map in which the variable non_autoscale_net is set to 1 if the source IP of the request does not come from the autoscale group of instances tagged with Role=microservice1, like for example:
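For two discovered instances, the generated file would look something like the following – exactly the shape the sed pipeline above emits, with illustrative IPs:

```nginx
geo $non_autoscale_net {
default 1;
3.120.10.15 0;
18.159.44.201 0;
}
```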
This map is subsequently used in the nginx/openresty vhost’s server or location section to block everything not coming from the allowed IPs:
if ($non_autoscale_net) {
    return 444;
}
This way, no matter what autoscale activity executes, we’ll immediately have the proper access list.
Nginx or Openresty
Perhaps it is already clear that nginx/openresty is a crucial part of our request lifecycle path. We choose Openresty when Lua code is involved in a particular use case. Beyond the most popular usages – such as serving static content, rate limiting, and reverse or FastCGI proxying – we also use it for traceability.
Traceability
To maintain a complete view of the request’s workflow, we need traceable information in every request/response, which we can later correlate with other logs, such as those generated by the applications. As nginx/openresty is always in the path of every request, here’s what we do:
# Set a trace ID if one isn't already available in the request. We use nginx's
# internal $request_id variable, which has 32 chars – the same as the default
# B3 TraceID. $managed_x_b3_traceid is set to $request_id if the x-b3-traceid
# header hasn't been set already; otherwise, the header's value is copied over.
map $http_x_b3_traceid $managed_x_b3_traceid {
    ""      $request_id;
    default $http_x_b3_traceid;
}
However, what if we need a 16-char trace ID rather than a 32-char one? In such a case, we use the previous map to generate a fresh one containing just the first 16 characters:
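nginx maps can extract a substring via a regex capture; a map along these lines (the exact shape is our assumption) yields the shorter ID:

```nginx
# keep only the first 16 characters of the managed trace ID
map $managed_x_b3_traceid $short_x_b3_traceid {
    "~^(?<short>.{16})" $short;
    default             $managed_x_b3_traceid;
}
```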
We use custom nginx/openresty log formats that include $short_x_b3_traceid and $managed_x_b3_spanid, and have the logs shipped with Filebeat to a centralized ELK stack (Elasticsearch, Logstash, and Kibana). This way, we can later correlate those IDs in Kibana with the IDs from our applications’ logs.
When choosing the right block in which to set proxy_set_header/add_header definitions, keep in mind that if you add headers in nginx’s server section and then also add headers within a location block inside that same server section, the add_header statements outside the location will not be applied – only the ones within the location will.
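A minimal illustration of that inheritance pitfall (header names and values are just examples):

```nginx
server {
    # inherited only by locations that declare no add_header of their own
    add_header X-Frame-Options "DENY";

    location /static/ {
        # declaring any header here discards the inherited one above,
        # so the server-level header must be repeated explicitly
        add_header Cache-Control "public, max-age=3600";
        add_header X-Frame-Options "DENY";
    }
}
```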
Combining nginx maps is an elegant feature, and we use it frequently. What we also found very useful is the Lua extension. A great example of a use-case is our non-production environments, where we serve arbitrary branches of a service. We distribute our applications as docker images, and we may have several branches of the same application operating on the same host. In that case, we simply separate those services by binding each docker container to a port that is a simple function of its branch name. For example, take an application called “API”: on the EC2 instance, a docker container running branch “testing” would be bound to port 3766.
What’s the logic behind that port number? We sum the Unicode code points of the characters in the branch name and add the result to a base number (3000 by default, or argv[2] if set).
#!/usr/bin/python
"""
Usage:
Positional arguments:
  Argument 1: branch name
  Argument 2: starting port, if not set default 3000
"""
import sys

if len(sys.argv[1:]) > 2 or len(sys.argv[1:]) < 1:
    print("Usage: Argument 1: branch name  Argument 2: starting port, if not set default 3000")
    sys.exit(1)

# Sum the code points of the branch name's characters
# and add them to the starting port.
SUM = 0
word = sys.argv[1]
starting_port = int(sys.argv[2]) if len(sys.argv) >= 3 else 3000
for i in range(len(word)):
    SUM += ord(word[i])
print(SUM + starting_port)
So when we start a new Docker container – we use Ansible roles for this – we first calculate the port and publish the container's service on that same port on the host. This way, we can run multiple branches of the same service/application on different, predictable ports on the same EC2 instance.
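As a rough sketch of what happens at container start time – the image name, container name, and internal port 3000 are illustrative assumptions, while the port arithmetic mirrors the Python script above:

```shell
#!/bin/sh
# Sketch: compute the branch's port, then publish the container on it.
branch=testing
port=3000
rest=$branch
while [ -n "$rest" ]; do
    c=${rest%"${rest#?}"}                 # first character of $rest
    rest=${rest#?}                        # drop that character
    port=$((port + $(printf '%d' "'$c"))) # add its code point
done
echo "branch '$branch' maps to port $port"

# Hypothetical run command; names and internal port are assumptions:
# docker run -d --name "api-$branch" -p "$port":3000 api:"$branch"
```

For the branch "testing" this yields 3766, matching the example above.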
We must match each request to its corresponding Docker container in the reverse proxy. For example, requests to https://testing.api.projectabc.com must be proxied to localhost:3766, and this is where Lua comes in handy. In our nginx upstream, the port is not fixed but calculated. In the example below, we also use Lua's balancer_by_lua_block to get a simple retry over an array of upstreams:
upstream api_proxy {
    # balancer_by_lua_block requires a placeholder server entry
    server 0.0.0.1 fail_timeout=3;

    balancer_by_lua_block {
        local upstream_servers = {
            "127.0.0.1",
            -- REPLACE,
            -- REPLACE,
            -- REPLACE,
            -- REPLACE,
            -- REPLACE,
        }
        local balancer = require "ngx.balancer"
        local host = upstream_servers[math.random(#upstream_servers)]

        -- same calculation as in the Python script: base port plus
        -- the sum of the branch name's character codes
        local port = 3000
        local string_length = string.len(ngx.var.branch)
        for i = 1, string_length do
            port = port + string.byte(ngx.var.branch, i)
        end

        if not ngx.ctx.retry then
            ngx.ctx.retry = true
            local ok, err = balancer.set_more_tries(5)
            if not ok then
                ngx.log(ngx.ERR, "set_more_tries failed: ", err)
            end
        end

        local ok, err = balancer.set_current_peer(host, port)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            return ngx.exit(500)
        end
    }
    keepalive 64;
}
In the server section we have:
server_name ~^(?<branch>.+)\.api\.projectabc\.com;
Nginx matches the Host header and creates a variable $branch = “testing”, which is then used to calculate the port number for the upstream. Because the Lua code performs the same calculation as the Python script, the request hits its corresponding Docker container:
local string_length = string.len(ngx.var.branch)
for i = 1, string_length do
    port = port + string.byte(ngx.var.branch, i)
end
If you have made it this far, you now have a good idea of how our request lifecycle works. We have designed it to be efficient, fast, and to involve as few moving parts as possible.
We hope you enjoyed this glimpse of DevOps here at Lab08. If you have any questions, comments, or takeaways from this piece, you can always reach out to us. We are always happy to discuss further with like-minded people.
Atanas was the DevOps Architect at Lab08 from 2018-2023. He used to handle the setup, maintenance, code deployment, security, troubleshooting, backup, monitoring, and incident reaction for all of our customers’ infrastructure.
His role was to ensure the stability and efficiency of all the platforms that we create together with our clients.
Be sure to follow us on social media to receive updates about other similar content!
Our last article covered how Lab08 uses product management as a value-adding tool to help ventures build intelligent software solutions. If you missed it, you can catch up on it here. This next piece digs a little deeper into how Lab08 develops MVPs – minimum viable products.
If you know Lab08, you know that we are big believers in simplicity. We want to cover the pain points, which makes us less interested in chasing shiny add-ons that merely make for a buzzword-worthy marketing presentation. Fundamentally, our methodology supports value-adding requirements that cover real needs – something we rarely see in software development today. It is easy to get lost and spend your resources chasing the unnecessary, which is why this article will give you a look into how we work with a laser-sharp focus on covering core needs – ultimately limiting the time and money spent by our clients.
Getting the users on the field
To set the right direction from the get-go, we need to map the strategic goals we must meet to succeed. To do so, we start by identifying the target segments and verticals and understand how they look at success. What does it take for them to succeed and what KPIs are essential to reach?
To get this right, we conduct various qualitative and quantitative interviews with internal stakeholders to understand the key jobs to be done. Doing the qualitative interviews helps us map the underlying user needs from different areas in the organization, while the quantitative surveys allow us to start quantifying the relevance.
Once we have insight into the internal drivers, we look towards the external factors. What does the competitive landscape look like, and what are the market conditions in the category? This gives us a new angle that can help support or bust our early hypotheses.
As we uncover needs in a business, we get a rather extensive list. It turns out, when people get going, they typically have a lot of stuff they would like to add. One of our critical tools for success in developing MVPs is securing proper prioritization of these needs. Fundamentally, we need to identify the Basic, Performance, and Delighter needs per the KANO model.
Prioritizing features using the KANO model
At Lab08, we are all about helping our clients build dynamic but straightforward and value-adding software. We have a passion for driving impact and business value, which allows us to come aboard some fantastic growth journeys in the process. If we are to ensure simplicity and scalability in our products, the KANO model is an absolute must for scoping the must-have features and separating the needs from the wants.
The model is a prioritization framework designed to help product teams – such as ourselves – rate initiatives. The different needs that have been explored with the users will be classified into three different categories: Basic, Performance, and Delighters.
It is essential to note that needs are dynamic. Needs will always be seen differently from venture to venture, depending on the required outcome of the software and the people we try to serve. Furthermore, needs can also be fluid. Needs that were Delighters years ago are suddenly considered to be Performance – or even Basic – needs today. Think of the ability to take pictures on your phone. What a delighter it was on the Nokia 7650 back in 2001. Now, it is an essential feature for smartphone shoppers.
Basic needs are must-haves. They are the ticket to play, as they are required before anyone will start using the product at all. The Basics are often not the shiniest attributes, but they need to be done exceptionally well for the overall experience. As such, the Basics are a treasured part of getting the MVP just right, as they must carry the full weight of serving core needs.
If you compare it to the auto industry, it is the equivalent of installing a seat belt. People are not interested in buying cars without seat belts, as it covers a fundamental need for safety. The lesson here? You need to cover the absolute basics to even get in contention with the target user.
Secondly, Performance needs are essential to serve, as they ensure that the product provides a good user experience. They are not considered business-critical, but they play a crucial role in making the product easy and intuitive to use daily. If we turn to the auto industry once again, a performance need would be the sound system. It is typically developed by another company, but it is integral to the user experience.
Lastly, we have the Delighters. You love it when Delighters are present, but you would not have noticed if they were not there. With cars, delighters are the things you least expect when you arrive at the dealer, like the ability to parallel park through AI or even the self-driving ability – that would be pretty cool.
If you do Delighters well, you get the “wow-effect,” where users become promoters and love to work with the product. However, these are not fundamental to the business or the value we try to add. Therefore, we only add Delighters if they are low-hanging fruit that is efficiently implemented. We want to build simple, not getting complex and bloated products.
It is easy to run after the shiny new attributes that create the “wow,” but these must co-exist in a hierarchical structure. The foundation consists of the Basics that make the product robust and serve the core requirements, which is why you need to fulfill the Basics first. If you run straight to the Delighters, no one will bother to use your product for more than two seconds. Always cover the Basics, then look at Performance. Delighters come last.
If you enjoyed learning more about how we work with the KANO model as we build MVPs in software, make sure you keep an eye out for future articles. We will continue to drop papers every month that describe what sets us apart and makes us stand out as a partner and collaborator when you look to develop or refine your software products.
Lachezar Blagoev is the Head of Product Management at Lab08. His responsibilities include defining product roadmaps, managing the backlog, and coordinating development efforts to ensure that milestones maximize the value we bring to all of our customers.
He acts as the link between customers and the business by representing the user’s perspective.
Lachezar has made essential decisions regarding all aspects of product strategy, including but not limited to UX, technical approach, business purpose, and compliance with regulations.