
Submitted URL: https://adamdelrosso.me/
Effective URL: https://www.adamdelrosso.me/
Submission: On February 24 via automatic, source certstream-suspicious — Scanned from DE




Adam Del Rosso
Software Engineer

About Me

Hi! My name is Adam Del Rosso.

I'm a Full Stack/IoT Software Engineer at Capstone Integrated Solutions and
recently graduated from the Rochester Institute of Technology with a degree in
Software Engineering. For the past few years, I've been developing a specialty
in cloud-native systems architecture and development. The power and flexibility
of cloud-native systems continue to impress me, and I enjoy using the cloud
toolbox to build new systems or give new life to existing ones.


Outside of work, you'll find me contributing to the open-source projects we all
depend on, biking, hiking, golfing, and otherwise trying to stay active away
from the screen.

Notable Achievements
RIT Business Model Competition 2020 - 1st
Project featured on RIT magazine cover
Computer Science II Game AI - 2nd of >100
Eagle Scout

Certifications


Recent Works


NOTES APP

AWS

I created a simple note-keeping web app called “scratch” using a stack of
serverless technologies.

The heart of the backend is a serverless REST API built on AWS API Gateway and
AWS Lambda. Lambda “functions” are short-lived compute containers that are
started in response to a trigger, in our case an API call. The application
also relies on Amazon Cognito for user management and authentication, Amazon
Simple Storage Service (S3) for user file uploads, and Amazon DynamoDB for user
note data. These services are defined in configuration files, and the
Serverless Framework handles their provisioning.
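As a sketch of how such an endpoint fits together (the table name and field names below are illustrative, not the app's actual schema), a "create note" Lambda handler might shape the authenticated request into a DynamoDB item like this:

```python
# Sketch of a Lambda handler for a "create note" endpoint.
# Table and field names are illustrative, not the app's actual schema.
import json
import time
import uuid


def build_note_item(user_id, payload):
    """Shape a DynamoDB item from the authenticated user and request body."""
    return {
        "userId": user_id,                        # partition key
        "noteId": str(uuid.uuid4()),              # sort key
        "content": payload.get("content", ""),
        "attachment": payload.get("attachment"),  # S3 object key, if any
        "createdAt": int(time.time() * 1000),
    }


def handler(event, context):
    """Triggered by API Gateway: POST /notes."""
    # Cognito places the caller's identity in the request context.
    user_id = event["requestContext"]["identity"]["cognitoIdentityId"]
    item = build_note_item(user_id, json.loads(event["body"]))

    import boto3  # imported lazily so the module also loads outside AWS
    boto3.resource("dynamodb").Table("notes").put_item(Item=item)

    return {"statusCode": 200, "body": json.dumps({"noteId": item["noteId"]})}
```

API Gateway invokes `handler` with the HTTP request serialized into `event`; the returned dict becomes the HTTP response.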

The front end is a single-page application (SPA) written in JavaScript using
the React framework. SPAs remove the need for full page loads by delivering the
code for the entire application upon first navigating to the website. This
enables more responsive web applications that can rival native apps. The
front-end assets are hosted on Netlify’s content delivery network (CDN), which
removes the concern of managing a dedicated web server.

Thanks to these serverless technologies, I do not need to worry about the
availability or scalability of my application, as it is in the hands of the
service providers. Although the project has little traffic, the backend is
capable of smoothly scaling up to meet the needs of millions of users without
delays or slowdowns, and without requiring any infrastructure management. As for
cost, the application could support hundreds of users within the limits of the
AWS free tier.


FUTURE PLANS

I hope to further improve the user experience of this app by utilizing service
workers, which will allow note data to be cached on user devices and displayed
immediately when the app is opened. This improvement will help bring the web
app closer to a native app experience.

Furthermore, I intend to make the service end-to-end encrypted, meaning that
user data is encrypted by the client app before it is stored in the cloud
services. That way, the developer (me) would be completely unable to view the
notes or files stored by the user. The current version uses TLS to protect user
data in transit, along with encryption at rest for S3 and DynamoDB.
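A minimal sketch of that planned client-side flow, using the `cryptography` package's Fernet recipe (AES-128-CBC plus HMAC) as a stand-in for whatever scheme the final design adopts; the helper names here are mine, not the app's:

```python
# Sketch of end-to-end encryption: the client encrypts before upload,
# so the backend only ever stores ciphertext.
from cryptography.fernet import Fernet


def encrypt_note(key: bytes, plaintext: str) -> bytes:
    """Run on the client before the note is sent to the API."""
    return Fernet(key).encrypt(plaintext.encode())


def decrypt_note(key: bytes, ciphertext: bytes) -> str:
    """Run on the client after the ciphertext is fetched back."""
    return Fernet(key).decrypt(ciphertext).decode()


# The key never leaves the user's device.
key = Fernet.generate_key()
stored = encrypt_note(key, "my secret note")  # what DynamoDB would hold
assert decrypt_note(key, stored) == "my secret note"
```

In a real design, the key would be derived from a user secret (for example via a password-based KDF) rather than generated fresh per session.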

This project was created by following the in-depth Serverless Stack guide.



THOUGHT-CONTROLLED WHEELCHAIR

Neurotech

This project was featured on the cover of the Spring 2020 RIT University
Magazine.

I am leading a research team that is working to develop a brain-computer
interface (BCI) for the control of a wheelchair. While BCIs have applications in
many domains (cognitive therapy, gaming, sleep, and meditation, among others),
our focus is on restoring independent mobility to individuals whose needs are
not met by traditional control mechanisms. This system is particularly useful
for individuals with locked-in syndrome who have no control over voluntary
muscles and may have limited eye movement. This project runs within the
Neurotechnology Exploration Team (NXT) at RIT, which is the college’s first
undergraduate research group.

The BCI is based on a non-invasive brain activity recording technique called
electroencephalography (EEG). EEG utilizes electrodes placed on the scalp to
record the weak electrical signals produced by large populations of neurons
within the outermost layers of the brain (measured in microvolts). With
training, users can learn to modulate the activity of certain brain regions
through forms of imagery. Our group focuses primarily on motor imagery
(imagined movement of the limbs), though we are also exploring cognitive tasks
such as mental arithmetic. Over time, many users learn to modulate their brain
activity well enough that signal processing and machine learning algorithms can
distinguish between a few imagery tasks. At the same time, the machine learning
algorithms continually adjust to improve the accuracy of the BCI, creating what
is called a coadaptive BCI system.
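As a rough illustration of the signal-processing side (a generic sketch, not NXT's actual pipeline; the sampling rate, band edges, and classifier are assumptions), one common motor-imagery feature is band power in the mu rhythm, which is suppressed over the motor cortex during imagined movement:

```python
# Generic motor-imagery feature sketch: per-channel mu-band power
# computed with NumPy, fed to a nearest-centroid decision rule.
import numpy as np

FS = 250           # sampling rate in Hz (assumed)
MU_BAND = (8, 13)  # mu rhythm, suppressed during imagined movement


def band_power(epoch, fs=FS, band=MU_BAND):
    """Mean power within `band` for one EEG epoch of shape (channels, samples)."""
    freqs = np.fft.rfftfreq(epoch.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)  # one feature per channel


def classify(features, centroids):
    """Pick the class whose mean feature vector is nearest to `features`."""
    dists = {label: np.linalg.norm(features - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)
```

A coadaptive system would periodically recompute the centroids (or retrain a stronger classifier) from freshly labeled trials while the user practices.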

EEG-BCI systems are difficult to engineer in part due to the high levels of
noise in the signal. Sources of noise include the AC electrical systems in the
room, electrical activity from muscles, movement of the eyes, blinking, and
movement of electrodes; electrode movement is made worse by the fact that a
wheelchair is a moving platform that might not be traveling on a smooth surface.
Due to these challenges and others, even the best EEG-BCI systems used in
controlled environments struggle to reach accuracies in the 90% range, and
performance can vary significantly between users. Combine this less-than-perfect
accuracy with common reports of mental fatigue in users, and the BCI begins to
look inadequate for direct wheelchair control, especially considering that the
safety of the user and those around them may be put at risk. These limitations
lead to the conclusion that an intelligent, semi-autonomous wheelchair system
must be created before an EEG BCI is suitable for long-term use in an
uncontrolled environment.

Because the development of this semi-autonomous system falls outside the scope
of our research group, I connected with Sidney Davis, president of the RIT
Multidisciplinary Robotics Club (MDRC). Sid decided to take on the project, and
made a proposal to work on it within the Kate Gleason College of Engineering’s
Multidisciplinary Senior Design program. I now serve as the customer for this
senior project, and Sid is leading a group of four other seniors on its
development. It is our goal to be testing functionality on an integrated BCI and
wheelchair system in the Spring of 2021.

See more about NXT at nxtr.org



ZOO STREAM ANIMAL DETECTOR

AWS

Header photo credit to the San Diego Zoo.

As part of the RIT class Engineering Cloud Software Systems, teams were asked to
propose and develop a project that showcases the capabilities of the AWS Cloud.
Our team chose to develop a system that notifies a set of subscribers when an
animal becomes visible in a zoo livestream. With our system, rather than
occasionally checking livestreams and hoping to see something interesting,
subscribers know exactly when their favorite animals are out and about.
Although the system is not limited to a specific exhibit, our proof of concept
was developed using the polar bear exhibit at the San Diego Zoo.


ARCHITECTURE

I led the design and development of the system for our team. The system is
primarily enabled by the Rekognition Custom Labels service, a managed machine
learning image-recognition offering. Custom Labels accepts a dataset of labeled
images from which it trains a machine learning model to (hopefully) a high
degree of accuracy. The trained model can then be deployed to a managed
inference service capable of labeling multiple images per second. We used
SageMaker Ground Truth to build the set of labeled training images through its
web application and private worker groups. The “workers” in this group log in
to the Ground Truth application and label images of the exhibit as either
containing a polar bear or not. In a real deployment of this system, we imagine
zookeepers or other zoo personnel labeling images from livestreams on an
occasional basis. Finally, our team used Python code in a Docker container,
deployed on AWS Fargate via ECS, to read from the stream, call Rekognition to
label stream frames, and send notifications to subscribers when appropriate.
SNS was used to send subscriber notifications in this proof-of-concept system;
a real deployment would use a managed email service like Amazon SES.
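A sketch of that detection loop (the function names, label string, and confidence threshold are illustrative; the team's actual code may differ):

```python
# Sketch: label one stream frame with Rekognition Custom Labels and
# notify subscribers through SNS when the animal is visible.
CONFIDENCE_THRESHOLD = 80.0  # percent; chosen here for illustration


def animal_visible(labels, target="polar-bear", threshold=CONFIDENCE_THRESHOLD):
    """True when the model reports the target label with enough confidence."""
    return any(
        l["Name"] == target and l["Confidence"] >= threshold for l in labels
    )


def check_frame(frame_jpeg, model_arn, topic_arn):
    """Label one JPEG frame; publish a notification if the animal is visible."""
    import boto3  # lazy import: the helper above is usable without AWS
    rek = boto3.client("rekognition")
    resp = rek.detect_custom_labels(
        ProjectVersionArn=model_arn, Image={"Bytes": frame_jpeg}
    )
    if animal_visible(resp["CustomLabels"]):
        boto3.client("sns").publish(
            TopicArn=topic_arn, Message="A polar bear is visible on the stream!"
        )
        return True
    return False
```

In practice the loop would also debounce notifications so subscribers are not messaged once per frame while the animal stays in view.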





SYSTEM DEMO