The official repository for ACM AI's The Real Team Tu.
The repo is organized into folders containing each team member's machine learning model and exploratory data analysis, as well as notes from each meeting (mentor and team meetings) and scatterplots from our collective EDA.
Our app is divided into two sections: the first is a form that takes user inputs and predicts the likelihood of credit card fraud using our model, and the second contains the results of our EDA along with the various graphs we made for it. We listed our names in the tabs, as well as references to important websites that helped us with the project.
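As a rough sketch of how the two sections fit together in Streamlit (the input fields, model file, and plot path below are placeholders, not our exact code):

```python
# Minimal sketch of the two-section Streamlit layout.
# "model.pkl", the feature columns, and the plot path are hypothetical placeholders.
import pickle

import pandas as pd
import streamlit as st

prediction_tab, eda_tab = st.tabs(["Fraud Prediction", "EDA"])

with prediction_tab:
    st.header("Credit Card Fraud Prediction")
    time = st.number_input("Seconds since first transaction", min_value=0.0)
    amount = st.number_input("Transaction amount", min_value=0.0)
    if st.button("Predict"):
        # Load a previously trained model from disk (placeholder path).
        with open("model.pkl", "rb") as f:
            model = pickle.load(f)
        features = pd.DataFrame([[time, amount]], columns=["Time", "Amount"])
        proba = model.predict_proba(features)[0][1]
        st.write(f"Estimated probability of fraud: {proba:.2%}")

with eda_tab:
    st.header("Exploratory Data Analysis")
    st.image("plots/amount_vs_time_scatter.png")  # placeholder path to one of our scatterplots
```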
-
Chuong Nguyen - Throughout this project, I faced many difficulties, like wrapping my head around the concept of Exploratory Data Analysis, fitting and training a Linear Regression model, and creating a working Streamlit web app. I spent many hours googling to understand how ML works and to get familiar with terms like hyperparameter, EDA, pipeline, notebook, fitting, pre-processing, training, pip install, and more. Working on the project alongside my schoolwork was definitely overwhelming, but the accomplishments I achieved and the friends I made proved it to be worthwhile. I never regretted being part of this project. I am grateful to have Vincent Tu as my mentor along with my teammates Rohan, Rebecca, Siya, Max, and Arvin. Thank you for all of the time we spent together. Vincent, I am looking forward to being a mentee of yours again.
-
Rohan Nambimadom - During the course of this project, I faced issues with getting my Python environment to work, understanding how the models worked, and figuring out how to manage my time between the project and schoolwork. Working with others taught me how to ask for and get the help I needed from both my teammates and the internet. Additionally, in getting my environment to work, I learned the importance of communicating my problems and, later on, of passing along necessary information, such as that the version of XGBoost we had to use was 0.90.
-
Max Weng - For me, the biggest challenge was figuring out how the Python libraries worked and how ML projects are created and structured. Coming from a background of algorithmic coding competitions, I was very unfamiliar with the process of creating such a project. I had to learn the different patterns that these libraries use and how to apply them. It took a lot of googling and reading through documentation to get a basic understanding of each step required in our project, and I had to learn how to effectively break challenges down into smaller pieces and solve each one as it came.
-
Arvin Zhang - While this project ended up being successful, it also brought many difficulties and learning experiences along the way. One of the biggest difficulties I faced was fully understanding the backbone and structure of each and every model. I started off the project with hours of research on our six primary models: Decision Tree, K-Nearest Neighbors, Logistic Regression, Support Vector Machines, Random Forest, and XGBoost. Each one introduced concepts outside my scope of knowledge at the time. Following my research, I ran into more difficulties implementing and fitting models using the pipeline we agreed on. With countless YouTube videos and Google searches, I was able to overcome these difficulties and find effective solutions. Throughout this project, I am glad to have deepened my ML understanding and improved my collaboration and communication skills.
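For a sense of what fitting those six models through a shared pipeline can look like, here is a small sketch; the dataset path, column names, and train/test split below are placeholders rather than our actual notebook code:

```python
# Sketch: fitting several candidate classifiers through one shared preprocessing pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

df = pd.read_csv("creditcard.csv")  # placeholder dataset path
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns=["Class"]), df["Class"], test_size=0.2, random_state=42
)

models = {
    "Decision Tree": DecisionTreeClassifier(),
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Support Vector Machine": SVC(),
    "Random Forest": RandomForestClassifier(),
    "XGBoost": XGBClassifier(),
}

for name, model in models.items():
    # Scale the features, then fit the candidate model on the same split.
    pipeline = Pipeline([("scaler", StandardScaler()), ("model", model)])
    pipeline.fit(X_train, y_train)
    print(f"{name}: test accuracy = {pipeline.score(X_test, y_test):.3f}")
```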
- Vincent Tu (Advisor): LinkedIn | GitHub
- Chuong Nguyen (Team member): LinkedIn | GitHub
- Rebecca Chen (Team member): LinkedIn | GitHub
- Siya Kamboj (Team member): LinkedIn | GitHub
- Rohan Nambimadom (Team member): LinkedIn | GitHub
- Arvin Zhang (Team member): LinkedIn | GitHub
- Max Weng (Team member): LinkedIn | GitHub