nielseni6/README.md

Ian E. Nielsen (PhD Student)

Conducting eXplainable Artificial Intelligence (XAI) Research with an Emphasis on Computer Vision and Object Detection [LinkedIn] [Scholar]

About Me

I am a machine learning researcher and engineer with over 5 years of experience implementing and training state-of-the-art machine learning models. My particular expertise is in computer vision, with an emphasis on eXplainable Artificial Intelligence (XAI).

My research is driven by a commitment to enhancing the trustworthiness and interpretability of AI models so that they can be deployed in real-world scenarios. By developing visually intuitive explanations of complex machine learning models, I aim to make AI interpretable and trustworthy enough for practical applications. This includes creating inherently interpretable models and using XAI as a tool to debug and improve models through novel architectures and training schemes.
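
As a minimal illustration of the gradient-based attribution methods my work studies, the sketch below computes a vanilla-gradient saliency map. The "model" here is a hypothetical stand-in (a random linear scorer over a flattened 8x8 image, plain NumPy, numerical gradients), not code from any of my papers; real attribution work would use a trained network and automatic differentiation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "model": a linear class scorer over a flattened 8x8 image.
W = rng.normal(size=(64,))   # hypothetical weights
x = rng.normal(size=(64,))   # hypothetical input image, flattened

def score(x):
    """Class score: f(x) = W . x"""
    return W @ x

def saliency(x, eps=1e-5):
    """Vanilla-gradient attribution, g_i = df/dx_i, estimated by
    central finite differences and visualized as |g| over the image."""
    g = np.empty_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        g[i] = (score(xp) - score(xm)) / (2 * eps)
    return np.abs(g).reshape(8, 8)

sal = saliency(x)
# For this linear stand-in the gradient is exactly W, so the map is |W|.
assert np.allclose(sal.ravel(), np.abs(W), atol=1e-4)
```

The attribution map highlights which input pixels most influence the class score, which is the basic object of study in the gradient-based XAI tutorial listed under Publications.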

Education

  • 2020-Present Henry M. Rowan College of Engineering, Rowan University Ph.D. in Electrical and Computer Engineering
  • 2016-2020 Henry M. Rowan College of Engineering, Rowan University B.S. in Electrical and Computer Engineering

Publications (Google Scholar, ORCID)

  • [DOI] Transformers in Time-Series Analysis: A Tutorial
    Circuits, Systems, and Signal Processing, vol. 42, no. 12, pp. 7433-7466, 2023.
    Sabeen Ahmed, Ian E. Nielsen, Aakash Tripathi, Shamoon Siddiqui, Ravi P. Ramachandran, and Ghulam Rasool.

  • [DOI] Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks
    IEEE Signal Processing Magazine, vol. 39, no. 4, pp. 73-84, 2022.
    Ian E. Nielsen, Dimah Dera, Ghulam Rasool, Ravi P. Ramachandran, and Nidhal Carla Bouaynaya.

  • [DOI] EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models
    IEEE Access, vol. 11, pp. 82556-82569, 2023.
    Ian E. Nielsen, Ravi P. Ramachandran, Nidhal Carla Bouaynaya, H. M. Fathallah-Shaykh, and Ghulam Rasool.

  • [DOI] Targeted Background Removal Creates Interpretable Feature Visualizations
    2023 IEEE 66th International Midwest Symposium on Circuits and Systems (MWSCAS), Tempe, AZ, USA, pp. 1050-1054, 2023.
    Ian E. Nielsen, Ravi P. Ramachandran, Nidhal Carla Bouaynaya, H. M. Fathallah-Shaykh, and Ghulam Rasool.

Popular repositories

  1. Robust_Explainability_Experiments

     Python · 2 stars

  2. yolov7_mavrc

     Forked from naddeok96/yolov7_mavrc

     Implementation of the paper "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors"

     Jupyter Notebook · 2 stars

  3. ADV-EMRG-TPCS-IN-CI-ML-AND-DM

     Jupyter Notebook · 1 star

  4. EvalAttAI

     Python · 1 star

  5. U-2-Net

     Forked from xuebinqin/U-2-Net

     The code for our newly accepted paper in Pattern Recognition 2020: "U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection."

     Python

  6. ShiftSmoothedAttributions

     Ian Nielsen's final project for Systems, Devices, and Algorithms in Bioinformatics, Spring 2021

     Python