Resources

Find the latest news & updates on AWS


Cloudtech Has Earned AWS Advanced Tier Partner Status

Oct 10, 2024 - 8 MIN READ

We’re honored to announce that Cloudtech has officially secured AWS Advanced Tier Partner status within the Amazon Web Services (AWS) Partner Network! This significant achievement highlights our expertise in AWS cloud modernization and reinforces our commitment to delivering transformative solutions for our clients.

As an AWS Advanced Tier Partner, Cloudtech has been recognized for its exceptional capabilities in cloud data, application, and infrastructure modernization. This milestone underscores our dedication to excellence and our proven ability to leverage AWS technologies for outstanding results.

A Message from Our CEO

“Achieving AWS Advanced Tier Partner status is a pivotal moment for Cloudtech,” said Kamran Adil, CEO. “This recognition not only validates our expertise in delivering advanced cloud solutions but also reflects the hard work and dedication of our team in harnessing the power of AWS services.”

What This Means for Us

To reach Advanced Tier Partner status, Cloudtech demonstrated an in-depth understanding of AWS services and a solid track record of successful, high-quality implementations. This achievement comes with enhanced benefits, including advanced technical support, exclusive training resources, and closer collaboration with AWS sales and marketing teams.

Elevating Our Cloud Offerings

With our new status, Cloudtech is poised to enhance our cloud solutions even further. We provide a range of services, including:

  • Data Modernization
  • Application Modernization
  • Infrastructure and Resiliency Solutions

By utilizing AWS’s cutting-edge tools and services, we equip startups and enterprises with scalable, secure solutions that accelerate digital transformation and optimize operational efficiency.

We're excited to share this news right after the launch of our new website and fresh branding! These updates reflect our commitment to innovation and excellence in the ever-changing cloud landscape. Our new look truly captures our mission: to empower businesses with personalized cloud modernization solutions that drive success. We can't wait for you to explore it all!

Stay tuned as we continue to innovate and drive impactful outcomes for our diverse client portfolio.


How does AWS TCO Analysis work?

Oct 18, 2022 - 8 MIN READ

Regardless of your business size, you can't skip assessing the value of any product or service you plan to purchase. Investing in the right systems, processes, and infrastructure is essential to business success. But how do you make sound financial decisions and determine whether a given product or service is actually generating value for your business?

Total Cost of Ownership (TCO) analysis is one method that can help in this situation, especially if you plan to run these numbers for the cloud. Amazon Web Services (AWS) is a leading public cloud platform offering over 200 fully featured services, and it supports TCO analysis for estimating the total cost of an asset or infrastructure on the cloud. AWS serves a diverse customer base of startups, government agencies, and the largest enterprises, and its agility, pace of innovation, security, and global network of data centers make it comprehensive and adaptable.

Read on to learn more about the TCO analysis and how AWS TCO analysis works.

What is TCO analysis?

As the name suggests, TCO estimates the costs associated with purchasing, deploying, operating, and maintaining an asset, whether that asset is a physical or virtual product, service, or tool. The primary purpose of TCO analysis is to assess an asset's cost throughout its life cycle and determine the return on investment.

In the IT industry, TCO analysis covers costs related to hardware and software acquisition, end-user expenses, training, networks, servers, and communications. According to Gartner, “TCO is a comprehensive assessment of IT or other costs across enterprise boundaries over time.”

TCO analysis in the cloud

The adoption of cloud computing has brought TCO analysis to the cloud as well. Cloud TCO analysis performs the same job for cloud infrastructure: it calculates the total costs of adopting, running, and provisioning it. When you are planning a migration, this analysis helps you weigh your current costs against the costs of cloud adoption. Besides Amazon, other big tech companies, including Microsoft, Google, and IBM, offer cloud TCO analysis, but Amazon's AWS remains the leading cloud service provider.

Why do businesses need AWS TCO analysis?

A TCO analysis helps you determine whether an investment will result in a profit or a loss.

Let's look at an example of how AWS TCO analysis helped a company increase its profit. Netflix, the top OTT platform, spent about $9.6 million per month on AWS in 2019, a figure projected to rise by 2023; according to this resource, it would be around $27.78 million per month. The biggest reason behind this investment is profitability, and TCO analysis helps Netflix understand where that profit comes from. AWS gave Netflix a cost-effective, horizontally scalable cloud architecture and let the company focus on its core business of video streaming, making it one of the most popular streaming platforms globally.

In another example, ignoring TCO analysis resulted in a loss. According to this report on 5GC, a TCO analysis of 5G core adoption found that postponing the move increases TCO over five years, losses that stem directly from disregarding what the analysis showed.

These examples show that your business needs both TCO analysis and cloud infrastructure. Skipping TCO analysis can lead to miscalculated IT budgets or purchases of inappropriate resources, which in turn cause problems like downtime and slower business operations. TCO analysis is a critical business practice, and ignoring it directly impacts financial decisions, so use AWS TCO analysis to set your business up for success.

How does AWS TCO Analysis work?

AWS TCO analysis refers to calculating the direct and indirect costs associated with migrating, hosting, running, and maintaining IT infrastructure on the AWS cloud. It assesses all the costs of using AWS resources and compares the outcome to the TCO of an alternative cloud or on-premises platform.

AWS TCO analysis is not a single-resource calculation or a one-step process. To understand how it works, you need to know the costs of your current IT infrastructure, understand the relevant cost factors, and know how to optimize cloud costs, comparing what it takes to deploy and manage scalable web applications or infrastructure on-premises versus on the cloud.

Here are steps to help you understand how AWS TCO analysis works:

Preliminary steps – Know the current value and build a strategy

Step 1 – Evaluate your existing infrastructure/web application cost

You must calculate and analyze the direct and indirect costs of your existing on-premises IT infrastructure. Perform a TCO analysis of this infrastructure across its various components:

  • Physical & virtual servers: They are the main pillars in developing the infrastructure
  • Storage mediums: Cost of database, disks, or other storage devices
  • Software & Applications: The analysis finds the cost of software and its constant upgrades. It also estimates the costs of acquiring licenses, subscriptions, loyalties, and vendor fees
  • Data centers: The analysis needs to check the costs of all linked equipment such as physical space, power, cooking, and racks with the data centers
  • Human Capital: Trainers, consultants, and people who run setups.
  • Networking & Security system: Find out the costs of these critical components

Don't limit yourself to direct and indirect costs. Identify any hidden costs that could arise from unplanned events, such as downtime, as well as opportunity costs; these estimates might prove helpful in the future.
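To make the roll-up concrete, here is a minimal sketch of a five-year TCO calculation over the component categories above; every figure is a hypothetical placeholder, not a benchmark.

```python
# Hypothetical 5-year TCO roll-up for an on-premises setup.
# All figures are illustrative placeholders, not benchmarks.

ANNUAL_COSTS = {
    "servers": 40_000,          # physical & virtual servers
    "storage": 12_000,          # databases, disks, backup media
    "software": 18_000,         # licenses, subscriptions, vendor fees
    "data_center": 25_000,      # space, power, cooling, racks
    "staff": 90_000,            # trainers, consultants, operators
    "network_security": 15_000, # networking & security systems
}

UPFRONT_COSTS = 120_000   # hardware acquisition and setup (placeholder)
HIDDEN_COSTS = 20_000     # estimated downtime / opportunity cost per year
YEARS = 5

def total_cost_of_ownership(years: int) -> float:
    """Sum upfront, recurring, and hidden costs over the asset's life."""
    recurring = sum(ANNUAL_COSTS.values()) * years
    hidden = HIDDEN_COSTS * years
    return UPFRONT_COSTS + recurring + hidden

print(f"5-year TCO: ${total_cost_of_ownership(YEARS):,.0f}")
```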

Step 2 – Build an appropriate cloud migration strategy

You must choose an appropriate AWS cloud migration strategy before calculating monthly AWS costs. Amazon offers many TCO analysis and migration tools from AWS and AWS Partners, such as CloudChomp CC Analyzer, Cloudamize, and Migration Evaluator. These tools can help you evaluate your existing environment, identify workloads, and plan the AWS migration. They provide excellent cost insights that support quick, effective migration decisions.

Primary step – Estimate AWS Cost

Know these cost factors

Every industry has different objectives and business operations, so cost analysis differs according to the AWS services, workloads, servers, and purchasing methods involved; the cost ultimately depends on how services and resources are used.

Still, you must consider the following factors, which directly impact your AWS costs (a back-of-the-envelope sketch follows the list).

  • Services you utilize: AWS offers various compute services, resources, and instances billed by the hour; billing runs from the moment you launch a resource or instance until you terminate it. You can also reserve capacity at predetermined rates.
  • Data transfer: AWS charges for aggregated outbound data transfer across services at a set rate. Inbound and inter-service data transfer within a region is free, but you should still check data transfer costs before launching.
  • Storage: AWS charges per GB of stored data, and the analysis depends on the storage classes you consume. Remember that both cold and hot storage options are available; hot storage is more expensive but immediately accessible.
  • Resource consumption model: You can choose how you consume resources: on-demand instances, reserved instances that offer discounted prepay options, or AWS Savings Plans.
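To see how these factors add up, here is a back-of-the-envelope sketch; the hourly rate, storage price, and transfer rate below are illustrative placeholders, not published AWS prices.

```python
# Hypothetical monthly AWS cost estimate combining the factors above.
# Rates below are illustrative placeholders, not published AWS prices.

HOURS_PER_MONTH = 730

instance_hourly_rate = 0.10   # on-demand compute, $/hour (placeholder)
storage_gb = 500              # stored data, GB
storage_rate_per_gb = 0.023   # $/GB-month (placeholder)
outbound_gb = 200             # outbound data transfer, GB
transfer_rate_per_gb = 0.09   # $/GB (placeholder; inbound is free)

compute = instance_hourly_rate * HOURS_PER_MONTH
storage = storage_gb * storage_rate_per_gb
transfer = outbound_gb * transfer_rate_per_gb

print(f"Compute:  ${compute:8.2f}")
print(f"Storage:  ${storage:8.2f}")
print(f"Transfer: ${transfer:8.2f}")
print(f"Total:    ${compute + storage + transfer:8.2f}/month")
```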

Know how to use AWS Pricing Calculator

Once you have analyzed the compute resources and infrastructure to deploy, understood these factors, and decided on the necessary AWS resources, use the AWS Pricing Calculator for cost estimation. This free web-based tool helps determine the total cost of ownership: it lets you explore services according to your needs and estimate costs.
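The calculator itself is a web UI, but the same underlying price catalog can also be queried programmatically through the AWS Price List API. Here is a minimal boto3 sketch; the instance type, region, and filter values are just examples.

```python
import json

import boto3

# The Price List API is served from a limited set of regions (us-east-1 here).
pricing = boto3.client("pricing", region_name="us-east-1")

# Look up on-demand EC2 pricing for one instance type (filter values are examples).
response = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "t3.medium"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
    ],
    MaxResults=1,
)

# Each PriceList entry is a JSON string describing the product and its terms.
for price_item in response["PriceList"]:
    product = json.loads(price_item)
    print(product["product"]["attributes"]["instanceType"])
```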

Back in the calculator UI, look at the image below to see how it works: add the required services, configure them with your details, and review the generated costs.

[Image: AWS Pricing Calculator workflow. Credit: Amazon AWS Pricing Calculator]

You can add prices for individual services or for groups of services. After adding a service to the calculator, see the following screenshot of configuring an EC2 service: you provide the required details such as location type, operating system, instance type, memory, pricing model, and storage.

[Image: Configuring an EC2 service in the calculator. Credit: Amazon AWS Pricing Calculator]

The best part is that you can download and share the results for further analysis. The following image shows a sample report; with this summary you can estimate monthly cost, budget, and other factors.

[Image: Sample cost estimate summary. Credit: Amazon AWS Pricing Calculator]

Note: Check this link for the various factors behind the pricing assumptions.

Know how to optimize cloud costs on AWS

Calculating costs on AWS is not enough; you also need to optimize them. AWS offers several options to manage, monitor, and optimize costs. Here are some tools you can use (a usage sketch follows the table):

AWS Trusted Advisor
  • Provides recommendations that follow AWS best practices to improve performance, security, and fault tolerance
  • Helps you optimize your cloud deployment through context-driven recommendations

AWS Cost Explorer
  • Provides an interface to view, visualize, and manage AWS costs and usage over time
  • Filtering, grouping, and reporting features help you manage costs efficiently

AWS Budgets
  • Tracks your costs and improves budget planning and control
  • Lets you create custom actions that help prevent overages, inefficient resource usage, or gaps in coverage

AWS Cost & Usage Report
  • Tracks your savings, costs, and cost drivers
  • Integrates easily with analytics tools for deeper analysis
  • Helps you spot cost anomalies and trends in your bills
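As an example of using these tools programmatically, here is a minimal sketch that pulls last month's cost per service through the Cost Explorer API (the dates are placeholders, and each API request carries a small charge):

```python
import boto3

# Cost Explorer is a global API served from us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2022-09-01", "End": "2022-10-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print each service that actually accrued cost in the period.
for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 0:
            print(f"{service}: ${amount:,.2f}")
```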


How Airbnb used AWS Cost & Usage Report for AWS cost optimization

Airbnb, a community marketplace based in San Francisco, was founded in 2008. The community has over 7 million accommodations and over 40,000 customers. In 2016, Airbnb decided to migrate all operations to AWS so its infrastructure could scale automatically. It worked: in just three years the company grew significantly and reduced its expenses through various AWS services (Amazon EC2, Amazon S3, Amazon EMR, etc.). In 2021, the company used the AWS Cost & Usage Report, Savings Plans, and actionable data to optimize its AWS costs. The outcome: storage costs down 27% and Amazon OpenSearch Service costs down 60%.

The company has since built a customized cost and usage data tool on AWS services, which helps it reduce costs and deliver actionable business metrics.

Final Stage: Avoid these mistakes

Often, businesses make mistakes like misconfiguration or choosing the wrong resources, which lead to increased costs. Keep the following points in mind to avoid them:

  • Never create or set up cloud resources without auto-scaling options or monitoring tools in place; this happens most often in dev/test environments.
  • Take care when configuring storage resources, classes, and data types. Misconfiguration often happens with storage tiers, such as Amazon Simple Storage Service (S3).
  • Avoid over-provisioned resources by consolidating them properly. Understand right-sizing to find the best match between instance types, sizes, and capacity requirements at minimal cost.
  • Choose a pricing plan carefully based on your infrastructure requirements; the wrong plan can make your cloud deployment expensive.
  • Don't ignore newer technologies, as they can reduce your cloud spending and increase productivity.

Closing Thought

Reports show that AWS can help businesses cut costs by up to 80% compared with equivalent on-premises options, lowering expenses and freeing up savings for innovation. So what are you waiting for? Plan your migration from on-premises IT infrastructure to the AWS cloud, calculate costs by following the steps above, and optimize them by avoiding the typical mistakes.

Managing the overall direct and indirect costs of your IT infrastructure takes time and process, and TCO analysis for a cloud migration project is a daunting job. AWS TCO analysis makes this complex process easier. Take advantage of it to determine the cost of your cloud migration project.


Questions to ask before planning the app modernization

Sep 28, 2022 - 8 MIN READ

According to a Market Research Future report, the application modernization services market is expected to reach USD 24.8 billion by 2030, growing at a CAGR of 16.8%. Novel technologies and improved applications are two factors driving this growth. Yet even as the market grows, many modernization projects fail, and according to this report, unclear project expectations are the biggest reason behind those failures.

That's why app modernization is a big decision for your organization and business expansion. To avoid failure, prepare a list of good questions before planning the modernization. Read on for a brief overview of app modernization, its need and benefits, and the questions that can help you design the best app modernization strategy.

What is App modernization?

App modernization replaces or updates existing software, applications, or IT infrastructure with new applications, platforms, or frameworks. Think of it as periodically upgrading applications to take advantage of technology trends and innovation. The primary purpose is to improve the efficiency, security, and performance of legacy systems, and the process encompasses not only updating or replacing components but also reengineering the entire infrastructure.

Need/Benefits of App modernization

Application modernization is growing across industries, which means it has become an essential business need. Here are the reasons your business needs app modernization:

  • To improve the business performance
  • To scale the IT infrastructure to work globally
  • To increase the security and protection of expensive IT assets
  • To enhance the efficiency of business processes and operations
  • To reduce the costs that arise from the incompatibility of older systems with newer technologies

10 questions to consider before planning app modernization

Before designing the app modernization strategy, you need a list of questions tailored to your business objectives and services. Here are questions that can help you make a proper plan:

1. What is the age of your existing legacy business applications?

You have to understand your existing IT infrastructure and resources: how they are working and performing in the current environment. Are they creating problems or running smoothly? Do they cause frequent downtime? If they are too old, you may need to replace everything; if you upgrade them regularly, check which resources actually need modernization.

2. What are the organization’s current technical skills and resources?

Analyze your existing team and experts to understand whether they can adapt to the new infrastructure, and gauge their capacity for learning new applications. If you modernize without assessing your team's capabilities, you may find later that your experts struggle in the new IT environment. So know your current technical skills and how you would train people for the transformation.

3. Would you be willing to conduct a proof of concept (PoC) to verify the platform’s functionality?

Can the new system's features solve your problems, and are they beneficial for the business? Perform a PoC to check the new system's functionality and find out how it works; a PoC helps you examine the essential features and other characteristics of the modernized apps.

4. Can the new system be easily modified to meet the business’s and customers’ changing needs?

Business needs and customer demands are not static; they change whenever technological advancements or regulatory changes happen. You should verify that the application will be able to adapt to such changes and keep meeting your business requirements.

5. How have you surveyed the market and decided on the appropriate platform(s) to execute essential modernization?

You must research the market and list all vendors offering the services you seek for your application modernization. Analyze all factors before settling on the platform and services best aligned with your objective.

6. How secure are the applications currently?

Determine the security level of your legacy applications, because modern apps need advanced, high-grade security. Applying old security practices to modern apps can sink your project, so it is better to audit your existing security first.

7. What are the opportunity costs and business risks of avoiding modernization?

If you avoid app modernization, how many business opportunities might you lose, and what risks might you face? As discussed, modernization is a business priority in this technology-driven era, so understand its importance in time and execute it as soon as possible.

8. What type of modernization are you seeking?

Know how flexible your modernization decision needs to be. In simple terms, what kind of modernization are you looking for to support your business: a long-lived system, or one that can be reshaped within a few years?

9. Did you consider the cloud when designing your application?

Running applications and managing the whole IT infrastructure on the cloud is a business priority. If your legacy applications are not cloud compatible, you must understand how to make them so; then you can easily migrate and modernize your applications on the cloud.

10. What integrations are required to modernize the app?

You must know which integrations among hardware, software, and other IT assets the modernized applications require. The answer will help you find the ideal platform for executing your business processes.

In a recent article, Forbes Councils member Yasin Altaf pointed out four factors: evaluate technical and business challenges, assess the current state of the legacy system, find the right approach, and plan. InfoWorld, a leading voice in emerging enterprise technology, has likewise reported that time and proper tools are key drivers of app modernization success; according to that report, 36% of IT leaders say allowing enough time to develop and plan is the best approach.

Thus, along with these questions, you must consider factors like time, budget, risk, and management constraints before planning the modernization.

Closing Thought

You research, ask questions from various resources, and analyze everything before purchasing anything!

Why?

To get the exemplary product/service!

It applies to app modernization too. Your business needs modernized applications in the modern technology era, and a questionnaire will help you plan an appropriate modernization if you want the right service and execution. We hope the questions above help you find the answers you need. Interested in modernizing your legacy applications? Contact us; you can always count on our expert team for assistance.


Building Modern Data Streaming Architecture on AWS

Sep 23, 2022 - 8 MIN READ

What’s new in AWS Kinesis?

Amazon Kinesis Data Firehose now supports dynamic partitioning, where it continuously groups in-transit data using dynamically or statically defined data keys and delivers the data to individual Amazon S3 prefixes by key. This reduces time to insight by minutes, reduces cost, and simplifies the overall architecture.
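To make this concrete, here is a hedged boto3 sketch of creating a Firehose delivery stream with dynamic partitioning enabled; the stream name, bucket, role ARN, and the customer_id partition key are assumptions for illustration, not values from this article.

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Minimal sketch: a delivery stream that partitions incoming JSON records
# by a customer_id field using a JQ metadata-extraction query.
# Stream name, bucket ARN, and role ARN are placeholders.
firehose.create_delivery_stream(
    DeliveryStreamName="example-partitioned-stream",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/example-firehose-role",
        "BucketARN": "arn:aws:s3:::example-bucket",
        # Extracted partition keys appear in the S3 prefix per delivered batch.
        "Prefix": "data/customer_id=!{partitionKeyFromQuery:customer_id}/",
        "ErrorOutputPrefix": "errors/",
        # Dynamic partitioning requires a buffer of at least 64 MB.
        "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 60},
        "DynamicPartitioningConfiguration": {"Enabled": True},
        "ProcessingConfiguration": {
            "Enabled": True,
            "Processors": [
                {
                    "Type": "MetadataExtraction",
                    "Parameters": [
                        {"ParameterName": "MetadataExtractionQuery",
                         "ParameterValue": "{customer_id: .customer_id}"},
                        {"ParameterName": "JsonParsingEngine",
                         "ParameterValue": "JQ-1.6"},
                    ],
                }
            ],
        },
    },
)
```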

Working with Streaming Data

For working with streaming data using Apache Flink, there is also Amazon Kinesis Data Analytics. Like Amazon Kinesis Data Streams and Kinesis Data Firehose, this service is fully managed: a serverless Apache Flink environment for stateful processing with sub-second latency. It integrates with several AWS services, supports custom connectors, and offers a notebook interface called Kinesis Data Analytics (KDA) Studio, a managed Apache Zeppelin notebook that lets you interact with streaming data.

Similar to Kinesis Data Analytics for Apache Flink, Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed service for running highly available, event-driven Apache Kafka applications.

Amazon MSK operates, maintains, and scales Apache Kafka clusters, provides enterprise-grade security features, supports Kafka Connect, and comes with multiple built-in AWS integrations.
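To ground the producer side of these services, here is a minimal boto3 sketch that writes JSON events to a Kinesis data stream; the stream name and event shape are placeholders.

```python
import json
import time

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Minimal producer sketch: records with the same partition key land on the
# same shard, preserving their relative order.
def put_event(stream_name: str, event: dict, partition_key: str) -> None:
    kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=partition_key,
    )

# Hypothetical usage with a placeholder stream and event.
put_event(
    "example-stream",
    {"user_id": "u-42", "action": "click", "ts": time.time()},
    partition_key="u-42",
)
```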

Architecture for Real-Time Reporting

Here we derive insights from input data coming from diverse sources and generate near-real-time dashboards. In the architecture below, you stream near-real-time data from source systems, such as social media applications, through Amazon MSK, Lambda, and Kinesis Data Firehose into Amazon S3. You can then use AWS Glue for data processing, and load and transform the data into Amazon Redshift using an AWS Glue development endpoint such as an Amazon SageMaker notebook. Once the data is in Amazon Redshift, you can create customer-centric business reports with Amazon QuickSight.

[Image: Architecture for Real-Time Reporting]

Architecture for Monitoring Streaming Data with Machine Learning

This architecture helps you identify and act on deviations from forecasted data in near real time. In the architecture below, data is collected from multiple sources using Kinesis Data Streams and persisted in Amazon S3 by Kinesis Data Firehose; initial data aggregation and preparation are done with Amazon Athena, and the results are stored back in Amazon S3. Amazon SageMaker is used to train a forecasting model and create behavioral predictions. As new data arrives, it is aggregated and prepared in real time by Kinesis Data Analytics, and the results are compared to the previously generated forecast. Amazon CloudWatch stores the forecast and actual values as metrics, and when the actual value deviates, a CloudWatch alarm triggers an incident in AWS Systems Manager Incident Manager.

[Image: Architecture for Monitoring Streaming Data with Machine Learning]
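To make the comparison step concrete, here is a minimal sketch of publishing forecast and actual values as custom CloudWatch metrics and alarming on their deviation; the namespace, metric names, and threshold are assumptions, and the alarm would be wired to Systems Manager Incident Manager separately.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish the forecast and the observed value as custom metrics
# (namespace and metric names are placeholders).
def publish(forecast: float, actual: float) -> None:
    cloudwatch.put_metric_data(
        Namespace="Example/Forecasting",
        MetricData=[
            {"MetricName": "ForecastValue", "Value": forecast},
            {"MetricName": "ActualValue", "Value": actual},
            {"MetricName": "AbsoluteDeviation", "Value": abs(actual - forecast)},
        ],
    )

# Alarm when the deviation stays above a hypothetical threshold for
# two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="forecast-deviation-example",
    Namespace="Example/Forecasting",
    MetricName="AbsoluteDeviation",
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=2,
    Threshold=100.0,
    ComparisonOperator="GreaterThanThreshold",
)

publish(forecast=1000.0, actual=1150.0)
```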

Conclusion

Here are the key considerations when working with AWS streaming services and streaming applications, whether you are choosing a particular service or building a solution.

Usage Patterns

Kinesis Data Streams is for collecting and storing data; Kinesis Data Firehose is primarily for loading and transforming data streams into AWS data stores and several SaaS endpoints; and Kinesis Data Analytics is essentially for analyzing streaming data.

Throughput

Kinesis Data Streams scales with shards and supports payloads of up to 1 MB; as mentioned earlier, you have a provisioned mode and an on-demand mode for scaling shard capacity. Kinesis Data Firehose automatically scales to match the throughput of your data. The maximum streaming throughput a single Kinesis Data Analytics for SQL application can process is approximately 100 Mbps.
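Because shard count drives both scaling and cost in provisioned mode, a quick sizing calculation helps. Each shard ingests up to 1 MB/s or 1,000 records/s, so the shard count must cover whichever limit the workload hits first; here is a minimal sketch with hypothetical numbers.

```python
import math

# Per-shard ingest limits for Kinesis Data Streams (provisioned mode).
SHARD_MB_PER_SEC = 1.0
SHARD_RECORDS_PER_SEC = 1000

def shards_needed(mb_per_sec: float, records_per_sec: float) -> int:
    """Shards must cover whichever limit the workload hits first."""
    by_bytes = math.ceil(mb_per_sec / SHARD_MB_PER_SEC)
    by_records = math.ceil(records_per_sec / SHARD_RECORDS_PER_SEC)
    return max(by_bytes, by_records, 1)

# Hypothetical workload: 5,000 records/s averaging 2 KB each (~9.8 MB/s).
print(shards_needed(mb_per_sec=5000 * 2 / 1024, records_per_sec=5000))  # -> 10
```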

Latency

Kinesis Data Streams enables data delivery from producers to consumers in less than 70 milliseconds.

Ease of use and cost

All the streaming services on AWS are managed and serverless, including Amazon MSK Serverless, which makes them easy to use by abstracting away the infrastructure management overhead. Of course, also weigh each service's pricing model against your unique use case.


PXL - Open Source Social Network Platform

Aug 25, 2022 - 8 MIN READ

Project Summary

PXL is an open-source social network platform for content creators. It lets users create public or private spaces for any purpose, such as a space for a particular task. Users can take advantage of social features such as building connections, posting projects that pique the interest of other users, adding team members, notifications, project participation, and more. They can also manage their profiles and conduct a global search. The tool offers an online version where anyone can try it for free, and PXL's user interface is logical and easy to navigate.

Problem Statement

The client needed a full-fledged backend application that could easily integrate with their prebuilt front-end application; they later asked us to integrate the backend with the front end.

We had to design and create a social platform where users can showcase their inventions and gain exposure. Users can post any software project, categorize it, invite team members, and participate in other projects.

Additionally, to meet the need for significant content uploads, a solution had to be developed that could easily handle the upload of media files while still being affordable and effective.

We also had to create a real-time notification system that monitors all network activity, such as accepting requests, declining requests, and being removed from one's network.

Our Solution

  1. With thorough testing, responsive design, and a focus on efficiency and performance, we concentrated on completing each task as effectively as we could.
  2. Based on the client's requirements, we used an S3 bucket, RDS, EC2, and a Flask microservice for media files, and SES for emails.
    – Amazon S3 was used for file hosting and data persistence.
    – Amazon Relational Database Service (RDS) was used for database deployment, as it simplifies the creation, operation, management, and scaling of relational databases.
    – Amazon EC2 was used for code deployment because it offers a simple web service interface for scalable application deployment.
  3. We sent emails using Amazon SES because it is a simple, cost-effective way to send and receive emails using your own email addresses and domains.
  4. Django-GraphQL was used for the backend, and Next.js for the front end. Django includes a built-in object-relational mapping (ORM) layer for working with application data from various relational databases.
    – GraphQL automates backend APIs by providing a type-strict query language and a single API endpoint where you can query all the information you need and trigger mutations to send data to the backend.
    – Next.js offers excellent server-side rendering and static site generation. We used the Flask microservice to handle heavy content uploads, since Flask's file-upload support gives the application the flexibility and efficiency to manage uploading and serving files (a minimal sketch follows this list).
  5. Using GitHub's automated CI/CD pipeline, we set up triggers for code checks and deployment.
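To illustrate the upload path described in the list above, here is a minimal Flask + boto3 sketch that accepts a file and stores it in S3; the bucket name and route are hypothetical, not PXL's actual code.

```python
import uuid

import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "example-media-bucket"  # placeholder, not PXL's actual bucket

@app.route("/upload", methods=["POST"])
def upload():
    """Accept a multipart file upload and persist it to S3."""
    file = request.files.get("file")
    if file is None or file.filename == "":
        return jsonify({"error": "no file provided"}), 400

    # Prefix with a UUID so concurrent uploads never collide.
    key = f"uploads/{uuid.uuid4()}-{file.filename}"
    s3.upload_fileobj(file, BUCKET, key)
    return jsonify({"key": key}), 201

if __name__ == "__main__":
    app.run(port=5000)
```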

Technologies

Django-GraphQL, Next.js, PostgreSQL, AWS S3, EC2, SES and RDS

Success Metrics

  • All deliverables were completed on time and exceeded expectations.
  • Met all of the client's expectations, with positive feedback.
  • The client was kept constantly updated on the status of the project.


Mizaru - Online Platform for Specially Abled People to Get Support Services

Aug 25, 2022 - 8 MIN READ

Project Summary

Creating a marketing website using ReactJS and AWS for the client to showcase what they do and how they do it.
Enhancing features in an existing web application where people with disabilities can request a communication facilitator or a support service provider, and providers can accept requests and receive payment.

Problem Statement

The client divided the project into several MVPs.

As part of MVP-1, the client wanted a marketing website that is fast, secure, and helps people understand what Mizaru is and how it can benefit them. They wanted a website that performs operations quickly, is protected from bots, and is cheap to maintain.

MVP-2 involved enhancing the client’s existing web application, which was previously very basic. They wanted to implement features like admin dashboard management, QR code-based check-in and check-out of providers to provide service, etc.

In MVP-3, they wanted us to create a mobile application offering the same functionality.

Our Solutions

1) We created a marketing website for users with ReactJS, which gave us a faster way to build and serve the application.
2) For deployment and maintenance we used AWS, which reduced our cost and maintenance effort.
3) For enhanced protection from bots, we implemented Google reCAPTCHA v3 (a server-side verification sketch follows this list).
4) Once users have a clear understanding, they move on to the web app or the mobile app.
5) Through the web app, customers (people with disabilities) can create a request based on their requirements (e.g., the need for a communication facilitator or support service provider). Our application connects people with disabilities to service providers: the request is visible to multiple providers in the network, who can choose to accept or reject it.
6) We integrated a payment gateway for processing payments, and both customers and providers are notified of the relevant events. We also created a dashboard for admins to track requests and generate reports as needed.
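For reference, server-side verification of a reCAPTCHA v3 token comes down to one POST to Google's siteverify endpoint. Here is a minimal sketch; the secret key is a placeholder, and the 0.5 score cutoff is a common default rather than Mizaru's actual setting.

```python
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "your-secret-key"  # placeholder

def is_human(token: str, min_score: float = 0.5) -> bool:
    """Verify a reCAPTCHA v3 token and check its bot-likelihood score."""
    resp = requests.post(
        VERIFY_URL,
        data={"secret": SECRET_KEY, "response": token},
        timeout=5,
    )
    result = resp.json()
    # v3 returns a score from 0.0 (likely bot) to 1.0 (likely human).
    return result.get("success", False) and result.get("score", 0.0) >= min_score
```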

Technologies

ExpressJS, ReactJS, Redux, AWS, Git, HubSpot, Google reCAPTCHA v3

Success Metrics

  1. Created and delivered the marketing website within the given timeframe.
  2. Created a report-generation feature for admins.
  3. Implemented QR code-based check-in and check-out for providers.
  4. Set up email reminders for customers and providers before each service.


Enklu - Redefining Augmented Reality

Aug 25, 2022 - 8 MIN READ

Executive Summary

Enklu provides an Augmented Reality (AR) runtime for UWP, WebGL, Windows Standalone, Android, and iOS. It carves out a niche in the market with a highly iterative product: changes to layout, assets, UI, or scripts give users instant feedback because new data is downloaded automatically, eliminating the need to rebuild. Enklu is truly cross-platform; not only does it compile flawlessly to multiple targets, it also allows experiences to be tailored to each platform. Enklu uses Unity along with a C# and Node.js backend to power a web app that helps content creators build AR/VR experiences, with React on the frontend.

Problem Statement

Most of the tech stack was deployed on Azure VMs. However, archaic deployment processes requiring a lot of manual input, coupled with poor infrastructure planning, had resulted in significant downtime. The problem was brought into sharp relief when the user base grew tenfold, and it was further compounded by a lack of health checks and resource monitoring. Subpar patches had brought core maintenance and enhancement work to a screeching halt.

Our Solutions

1) First, we proposed moving the frontend build files to S3 to reduce the load on the server (a minimal upload sketch follows this list). We then automated the build and deployment of Docker images using GitHub Actions and Terraform, and set up better resource checks using the built-in Azure triggers.
2) Next, we proposed rewriting parts of the code to handle errors better and setting up Node clusters behind a load balancer to reduce the load on the primary Unity servers. This also reduced downtime, since nodes could be safely brought down during low-traffic hours without affecting the user experience.
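As a sketch of the first step, here is what uploading a frontend build directory to S3 with boto3 can look like; the bucket name and build path are placeholders, and in practice this ran inside the CI pipeline.

```python
import mimetypes
from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "example-frontend-bucket"   # placeholder
BUILD_DIR = Path("build")            # placeholder build output directory

# Upload every build artifact, preserving relative paths as S3 keys
# and setting Content-Type so browsers render the files correctly.
for path in BUILD_DIR.rglob("*"):
    if path.is_file():
        key = path.relative_to(BUILD_DIR).as_posix()
        content_type = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
        s3.upload_file(str(path), BUCKET, key, ExtraArgs={"ContentType": content_type})
        print(f"uploaded {key}")
```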

Technologies

C#, Node.js, AWS (SQS, S3), Azure (VMs and load balancer), Unity, .NET, Docker

Success Metrics

1. Reduced downtime
2. Better error alerts
3. Reduced first response time (FRT) for resource hiccups


Get started on your cloud modernization journey today!

Let Cloudtech build a modern AWS infrastructure that’s right for your business.