From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Ennabai (talk | contribs) at 16:52, 6 March 2025. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Mohammad Ennab

Mohammad Ennab holds a Ph.D. in Artificial Intelligence from the University of Quebec at Chicoutimi, where his research focused on the application of AI in medical imaging. He also holds a Ph.D. in Management Information Systems, a Master's degree in Computer Information Systems, and a Master's degree in Quality Management. His research interests encompass artificial intelligence, database design, programming languages, cloud computing, and cybersecurity. With extensive academic experience, Ennab has served as a lecturer in Information Technology and Cybersecurity at institutions such as City University College of Ajman (UAE) and Kensley College (Canada).

Beyond academia, he has worked as a programmer and systems analyst in Jordan’s government sector, specializing in database administration and IT security frameworks. Currently, he is an instructor in the Technology Management Program at Saskatchewan Polytechnic, where he integrates cutting-edge AI and data science methodologies into business and technology education.

Research areas

Mohammad Ennab's research focuses on the intersection of Artificial Intelligence (AI), Machine Learning, and Data Science, with applications in medical imaging, AI interpretability, cybersecurity, and business technology management. His work aims to enhance AI-driven decision-making, optimize deep learning models, and bridge the gap between AI, healthcare, and business intelligence.

1. AI and Machine Learning in Medical Imaging

Ennab specializes in the development and optimization of deep learning models for medical imaging, focusing on:

  • AI interpretability and transparency in clinical decision-making.
  • Pixel-level interpretability techniques, such as Grad-CAM and SHAP, to enhance the explainability of AI-driven diagnostic tools.
  • Deep learning-based disease detection and classification, with applications in X-ray and CT imaging.
  • Fuzzy logic and AI integration for improving diagnostic accuracy in radiology.

2. AI Interpretability and Optimization

A key focus of Ennab's research is improving the trustworthiness and reliability of AI models by:

  • Developing explainable AI (XAI) frameworks to increase AI transparency in healthcare and business analytics.
  • Applying Shapley values and game theory-based models to enhance AI interpretability.
  • Implementing optimization algorithms for robust and bias-free AI models in critical applications.

3. Cybersecurity and Cloud Computing

His work in AI-powered cybersecurity and cloud security frameworks includes:

  • Leveraging machine learning for intrusion detection and anomaly detection.
  • Designing secure cloud computing architectures for business and healthcare applications.
  • Exploring blockchain and AI-based encryption models for enhancing cybersecurity in cloud environments.

4. Data Science and Business Technology Management (BTM)

Ennab applies AI and predictive analytics to optimize decision-making in business and finance, focusing on:

  • Big data analytics and business intelligence using Power BI, SQL, and predictive modeling.
  • AI-driven risk assessment models for business forecasting and financial decision-making.
  • Data-driven decision support systems to enhance organizational efficiency and strategic planning.

5. AI in Bioinformatics and Computational Healthcare

His ongoing research explores the potential of AI in bioinformatics and precision medicine, including:

  • AI-driven genomic data analysis for biomarker discovery and disease prediction.
  • Applying machine learning algorithms to biostatistics and epidemiology.
  • Neural network models for neurological disorder diagnosis, including Alzheimer's and Parkinson's disease.

Selected publications

  • "Enhancing Pneumonia Diagnosis through AI Interpretability: Comparative Analysis of Pixel-Level Interpretability and Grad-CAM on X-ray Imaging with VGG19," IEEE Open Journal of the Computer Society, 2025 (accepted).
  • "A Novel Convolutional Interpretability Model for Pixel-Level Interpretation of Medical Image Classification Through Fusion of Machine Learning and Fuzzy Logic," Smart Health, Elsevier, 2025.
  • "Enhancing Interpretability and Accuracy of AI Models in Healthcare: A Comprehensive Review on Challenges and Future Directions," Frontiers in Robotics and AI, 2024.
  • "Advancing AI Interpretability in Medical Imaging: Novel Comparative Study of Post-hoc Explanations and PLI," MDPI MAKE, 2024.
  • "Designing an Interpretability-Based Model to Explain the Artificial Intelligence Algorithms in Healthcare," MDPI Diagnostics, 2022.
  • "Survey of COVID-19 Prediction Models and Their Limitations," Information Systems, 2022.
  • "Combining Pixel-Level Interpretability with Shapley Values and Game Theory for Enhanced Diagnostic Accuracy," (in progress).