There is nothing more exciting to me than learning new things. Every day, I try to observe how our world works and explore new ways to innovate in how we live.
Doctor of Philosophy | Purdue University
Program: Aeronautics and Astronautics Engineering | Concentration: Autonomy and Control
2024 (In Progress)
Master of Science | University of Massachusetts Dartmouth
Program: Computer Science
Thesis: Graph Induced Lifelong Learning through Features Similarities and Dissimilarities
2021
Bachelor of Science | University of Massachusetts Dartmouth
Program: Computer Science
2019
Diploma | Lawrence High School (Math, Science and Technology Academy)
Honors, Leadership and Awards: Valedictorian; L'Pin Award Recipient
2016
Graduate Research Assistant | Purdue University
August 2021 - Present (In Progress)
Research Associate (Intern) | Hewlett Packard Labs
May 2021 - Present (In Progress)
Graduate Teaching Assistant | University of Massachusetts Dartmouth
January 2020 - May 2021
Research Fellow | University of Texas at Dallas
May 2019 - August 2019
Research Assistant | University of Massachusetts Dartmouth
September 2017 - May 2019
Master's Thesis | Graph Induced Lifelong Learning through Features Similarities and Dissimilarities
Traditional approaches to training classical neural networks require that every class the model might encounter be sampled and presented during initial training. This requirement limits the domain of problems that can be solved. For instance, building a vehicle that can traverse uncharted territory would be challenging because it is difficult to construct a model that accounts for every unknown situation. In this thesis, we present a proof-of-concept framework and technique for a novel approach to continual lifelong learning that uses feature similarities and dissimilarities within a given batch of data to solve never-before-seen tasks. Our approach can be applied to both Euclidean data and graphs, and it sustains notable accuracy as new classes are introduced, without any retraining or rehearsal.
At the heart of our technique, Lign, is the fine-tuning and pruning commonly used in transfer learning: weights that detect key features of previously solved tasks are temporarily removed from the network and reused to understand new problems. After pruning, the neural network can serve as an embedder that, with the aid of clustering techniques, labels data based on its learned features. Lign_MNIST (a restructured model tested on the MNIST dataset) demonstrated feature comparison and learning when shown unknown digits. Additional results from models tested on the CIFAR-100 and Cora datasets provide further insight into the inner workings of the technique.
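For illustration only, here is a minimal sketch of the embed-then-cluster step described above. It assumes a toy two-layer network whose classifier head has already been pruned away, and uses scikit-learn's KMeans as the clustering technique; the weights, shapes, and data are placeholder assumptions, not the actual Lign implementation.

```python
# Hypothetical sketch of the embed-then-cluster step (not the Lign code).
import numpy as np
from sklearn.cluster import KMeans

def embed(x, w1, w2):
    """Forward pass through a pruned two-layer network used as an embedder."""
    h = np.maximum(x @ w1, 0.0)  # ReLU hidden layer
    return h @ w2                # embedding layer (classifier head removed)

rng = np.random.default_rng(0)
w1 = rng.normal(size=(784, 128))   # placeholder weights; assumed to come from prior training
w2 = rng.normal(size=(128, 32))

# Unlabeled batch containing never-before-seen classes (placeholder data).
batch = rng.normal(size=(256, 784))
codes = embed(batch, w1, w2)

# Cluster the embeddings; cluster ids act as provisional labels for the
# unknown classes, with no retraining or rehearsal of the base network.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(codes)
print(labels[:10])
```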
2021
Book Chapter & Conference | An Educational Tool for Exploring the Pumping Lemma Property for Regular Languages
The pumping lemma has been a difficult topic for students to understand in a theoretical computer science course, in part due to a lack of tool support. In this paper, we present an active learning tool called MInimum PUmping length (MIPU), educational software for exploring the pumping lemma property of regular languages. For a given regular language, MIPU offers three major functionalities: determining the membership of an input string, generating a list of short strings that belong to the language, and automatically calculating the minimal pumping length of the language. The software tool has been developed to provide educational assistance to students, helping them better understand the concepts of the pumping lemma and minimum pumping length and promoting active learning through hands-on practice.
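As a small illustration of the third functionality, the sketch below approximates the minimal pumping length of a regular language given as a Python regular expression by brute force. This is not MIPU's implementation: the alphabet, the string-length bound, and the pump-count bound are all assumptions made for demonstration.

```python
# Brute-force approximation of the minimal pumping length of a regular
# language given as a Python regex (illustrative sketch, not MIPU).
import re
from itertools import product

ALPHABET = "ab"   # assumed alphabet for the demo
MAX_LEN = 8       # only strings up to this length are examined
PUMP_UP_TO = 3    # pump counts i = 0..3 are checked (approximates "all i")

def in_lang(pattern, s):
    return re.fullmatch(pattern, s) is not None

def pumpable(pattern, s, p):
    """Does some split s = xyz with |xy| <= p, |y| >= 1 pump within the language?"""
    for j in range(1, min(p, len(s)) + 1):        # j = |xy|
        for k in range(1, j + 1):                 # k = |y|
            x, y, z = s[:j - k], s[j - k:j], s[j:]
            if all(in_lang(pattern, x + y * i + z) for i in range(PUMP_UP_TO + 1)):
                return True
    return False

def min_pumping_length(pattern):
    strings = ["".join(t) for n in range(MAX_LEN + 1)
               for t in product(ALPHABET, repeat=n)]
    members = [s for s in strings if in_lang(pattern, s)]
    for p in range(MAX_LEN + 1):
        if all(pumpable(pattern, s, p) for s in members if len(s) >= p):
            return p
    return None

# Example: (ab)* has minimal pumping length 2, since the empty string
# cannot be pumped and the language has no member of length 1.
print(min_pumping_length("(ab)*"))  # -> 2
```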
2020
Research Poster | A Comparison of the Reliability between Traditional Machine Learning Techniques and Deep Learning in the Classification of Breast Cancer
2019