# Habib Boloorchi Tabrizi

[email protected] • linkedin.com/in/Habib-Boloorchi • github.com/Habib-Boloorchi

Ph.D. candidate in Computer Science with over seven years of experience in machine vision, robotics, and data science. Specialized in developing sophisticated algorithms for autonomous navigation and human-robot interaction. Proven track record of leading and deploying innovative technological solutions in real-world environments, leveraging expertise in Python, C++, and machine learning. Passionate about applying cutting-edge computer vision techniques to enhance robotic functionalities and human experiences.

## EDUCATION

**Oklahoma State University** - Stillwater, Oklahoma
Ph.D. in Computer Science, Graduation: May 2024
**Oklahoma State University** - Stillwater, Oklahoma
Big Data Analysis Certificate, May 2022

**University of Science & Culture** - Tehran, Iran
Bachelor of Software Engineering, February 2014
## Skills

• Programming & Scripting: Python, Java, C++, JavaScript, TypeScript, Dart, Swift, Kotlin, Scala, MATLAB
• Frameworks & Libraries: PyTorch, TensorFlow, Keras, Flutter, Firebase, Firestore, Node.js, Django, ROS, Unity
• Tools & Platforms: Google Cloud Platform (GCP), SolidWorks, Linux, Docker, Kubernetes, ROS/ROS2, KNIME, Proteus, OpenCV
• Specialized Technologies: Machine Vision, Computer Vision, Deep Learning, NLP, Multi-Agent Systems, IoT, Single-Board Computers, Data Visualization, Agile Methodologies, Sensor Integration, Embedded Systems, Autonomous Systems
• Soft Skills: Leadership, Team Coordination, Project Management, Research & Development, Strategic Communication

## EXPERIENCE

**ILead Laboratory, Department of Computer Science, Oklahoma State University** | May 2023 - August 2023
Graduate Research Assistant
• Designed and developed an explainable architecture for interpreting pie charts and generating natural-language descriptions, enhancing machine understanding of visual data.
• Developed algorithms that improve machine interpretation of visual data for autonomous robotic actions, and led the integration of AI-driven systems for interpreting and responding to human environments, improving interaction and operational efficiency.
• Led and coordinated a research team on the ProteinBERT project: managed project timelines and revised and contributed to a research paper published from our findings.
Skills and Languages: Machine/Computer Vision, NLP, Deep Learning Architecture Design, Multi-Agent Systems, LLMs

**Sanborn** | June 2022 - August 2022
Deep Learning and Machine Vision Engineer (R&D)
• Engineered depth-perception algorithms for autonomous navigation systems in robotics, significantly enhancing system responsiveness and real-world applicability.
• Contributed to Agile sprints and ensured project compatibility during development of a SLAM system for a self-driving car.
Skills and Languages: PyTorch, TensorFlow, Keras

**PricewaterhouseCoopers (PwC)** | July 2020 - August 2020
Data Engineer
• Designed and implemented data visualization tools, using machine learning to optimize data analytics processes and align project deliverables with business objectives.
• Facilitated Agile communications to align project requirements with development efforts.
Skills and Languages: Tableau, Python, General Machine Learning

**Oklahoma State University Application Center** | May 2020 - August 2021
Mobile Application Full-Stack Developer (R&D)
• Architected and launched a cross-platform mobile application, focusing on scalable full-stack solutions for startups.
• Led discussions on entrepreneurial strategies, ensuring idea feasibility and adapting project scope based on feedback from UX/UI designers during Agile sprints.
Skills and Languages: Flutter, Dart, Firebase, Firestore, Node.js

**Baker Hughes, a General Electric Company** | May 2019 - August 2019
Machine Vision and Deep Learning Engineer (R&D)
• Developed a machine vision application to analyze drill-bit reliability using multi-angle imaging, enhancing predictive-maintenance capabilities.
• Took leadership of frontend development after a key team member's departure, maintaining team focus and morale by proactively addressing project hurdles.
Skills and Languages: Flutter, Dart, Firebase, Firestore, Node.js, Google Cloud Platform (GCP), JavaScript, TypeScript

**Advanced Technology Research Center, Oklahoma State University** | May 2018 - August 2019
Machine Vision and Robotics Engineer (R&D)
• Enhanced robotic systems by integrating visual-inertial odometry with multi-sensor data, improving navigation precision on platforms such as Raspberry Pi and Jetson Nano.
• Developed a responsive robot on a Jetson Nano, capable of hand-movement tracking and shape recognition for educational demonstrations.
• Reported weekly to the Air Force Institute, providing optimized software solutions for visual-inertial odometry on single-board computers.
Skills and Languages: ROS, IoT, C++, Java, Python, Single-Board Computer Programming, SolidWorks

**Oklahoma State University Center for Cyber-Physical Systems** | August 2018 - February 2019
Graduate Research Assistant (R&D)
• Designed CAD models and conducted simulations to explore lunar habitat architectures for NASA's space exploration efforts.
• Created an interactive VR platform, focused on the solar system, to enhance learning for autistic children.
• Engaged with NASA engineers and psychologists to ensure software applicability and educational effectiveness.
Skills and Languages: SolidWorks, Unity, C#, JavaScript

**Mechatronic Research Lab (Qazvin University, MRL-SPL)** | August 2015 - May 2016
Cognitive Robotics Research and Development
• Implemented a vision self-calibration method for humanoid robots, significantly improving operational accuracy.
• Developed a classification system for EEG signals to enhance brain-computer interfacing, employing MATLAB for extensive data analysis.
• Founded and led a multidisciplinary team, instituting an Agile framework to synchronize efforts across cognitive science, AI, and mechanical design.
Skills and Languages: Proteus, OpenCV, C++, Neuroscience and Brain Anatomy, MATLAB

**Department of Computer Science, Oklahoma State University** | August 2017 - May 2024
Graduate Teaching Assistant
• Supported and mentored students, from August through May each year, in specialized courses including Formal Language Theory (graduate), Theoretical Foundations of Computing, Java, C++, Optimization of Programming Languages, Database Management 2 (graduate), Mobile App Development (iOS and Android), Data Structures, Algorithms, and Discrete Math.
Skills and Languages: C++, Java, Kotlin, Swift, Scala, Django, MySQL, PySpark, Hadoop, KNIME, Flutter, Python

## Scholarships and Awards

United States (Oklahoma State University):
• Finalist, Business Plan Competition, Riata Center, Spears School of Business | February 2024
• Finalist, Business Plan Competition, Riata Center, Spears School of Business | February 2023
• 1st place, Cultural International Students Art Performance Competition | November 2022
• 3rd place, Business Plan Competition, Riata Center, Spears School of Business | April 2022
• Creativity, Innovation and Entrepreneurship Scholarship | December 2021
• 1st Innovative Idea, Automation and Transportation Competition | April 2018

International:
• Cognitive Science Researcher Scholarship, Qazvin Mechatronic Research Lab | August 2016
• 1st place, Iran-Open International Standard Platform Robotics League Technical Competition | April 2016
• Finalist, Iran-Open International Standard Platform Robotics League Soccer Competition | April 2016
• Finalist, Path-Finder Robot Race, Sharif University | September 2014
• 4th place, Machine Vision for Irregular Gesture Recognition Competition, Amirkabir University | September 2013
• Finalist, Micro-Mouse Maze Robotic Competition, Amirkabir-Kharazmi | September 2011
• Finalist, Path-Finder Robot Race, University of Science and Culture | September 2009

## Publications

● Brain-Inspired Visual Odometry: Balancing Speed and Interpretability through a System of Systems Approach. The International Conference on Computational Science and Computational Intelligence (CSCI 2023)
● Enhancing ProteinBERT: Integrating Intrinsically Disordered Proteins for Comprehensive Proteomic Predictions. International Conference on Bioinformatics and Biomedicine (BIBM 2023)

## Projects Summary
**Dissertation**
I improved the navigation of robots and autonomous vehicles by combining analytical and machine-learning techniques. This includes using generative deep learning to detect anomalies in Inertial Measurement Unit (IMU) data, generate IMU data, and fuse it with visual odometry coordinates for consistent and concise spatial awareness. With the addition of explainable AI, the system boosts accuracy and adaptability in changing environments and can run effectively on single-board computers.

**ILead Lab Project**
We created a new approach to making AI systems more understandable, using a framework that focuses on explaining AI decisions, especially for tasks like describing pie charts. It combines multiple AI agents to handle different parts of the explanation process, showing how this method can be applied in areas like image recognition and language translation to make AI's workings clearer.

**Baker Hughes, a General Electric Company**
Our team crafted an innovative app using Flutter, incorporating computer vision and AI to analyze oil-exploration drill bits. It offers 3D visualizations and reliability assessments from 2D images taken at different angles. I led the backend development on Google Cloud Platform (GCP) and was instrumental in 70% of the frontend work, blending my expertise in AI, computer vision, and software development.

**Entrepreneurship (Retinator)**
Retinator is a startup developing AI-powered glasses designed to empower blind individuals, enhancing their independence and safety. The technology focuses on 3D mapping and obstacle detection, significantly improving mobility and daily life for visually impaired users. The device captures the scene with a camera mounted on the glasses and sends the imagery to a mobile phone, which uses an LLM to generate a description of the scene that the vision-impaired user cannot see.