Grey-box testing, a software testing methodology in which testers possess limited knowledge of the system's internal structure, necessitates strategic selection of functionalities for focused evaluation. Traditional approaches often rely on a combination of expert knowledge, historical data, and intuition, which can be subjective and may miss crucial testing areas. This paper proposes a novel machine learning (ML)-powered framework that analyzes user data and system logs to recommend potential areas of interest for grey-box testing. The framework utilizes various ML algorithms to extract valuable insights from user behaviour patterns, system anomalies, and historical data on user engagement and past defects. By integrating these insights with historical defect rates and the potential severity of issues, the framework generates a prioritized list of functionalities to target during testing. Additionally, it incorporates a recommendation engine to present testers with the prioritized list and relevant insights from the ML models. This data-driven approach aims to enhance software quality and tester efficiency by guiding grey-box testing efforts towards functionalities with a higher likelihood of harboring critical defects.
Grey-box testing, Machine learning, User behavior analytics, Anomaly detection, Test case prioritization, Software quality
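The prioritization step described in the abstract can be illustrated as a simple risk ranking: each functionality is scored by the product of a model's predicted defect likelihood and the potential severity of an issue there. The functionality names and scores below are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: rank functionalities for grey-box testing by
# risk = predicted defect likelihood * issue severity.
# All names and numbers here are illustrative assumptions.

def prioritize(functionalities):
    """Sort functionalities, highest risk first."""
    return sorted(functionalities,
                  key=lambda f: f["defect_likelihood"] * f["severity"],
                  reverse=True)

candidates = [
    {"name": "checkout", "defect_likelihood": 0.72, "severity": 5},
    {"name": "search",   "defect_likelihood": 0.40, "severity": 2},
    {"name": "login",    "defect_likelihood": 0.55, "severity": 4},
]

for item in prioritize(candidates):
    print(item["name"], round(item["defect_likelihood"] * item["severity"], 2))
```

In the framework described above, the likelihood term would come from the ML models rather than being hard-coded.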
Image enhancement is crucial for applications like medical imaging, surveillance, and photography. This study explores the effectiveness of various machine learning algorithms, including convolutional neural networks (CNNs) and generative adversarial networks (GANs), for enhancing colour images. Method: We propose an end-to-end architecture that learns to enhance images while maintaining colour fidelity, contrast, and sharpness. Our experiments demonstrate that deep learning-based methods outperform traditional techniques such as histogram equalisation and Retinex. We also investigate transfer learning and fine-tuning strategies to adapt pre-trained models to specific domains. Result: The results highlight the potential of machine learning in improving colour image quality and offer insights into future research directions. The paper explores the advancement of adaptive image enhancement systems by integrating genetic programming and machine learning methodologies. It introduces a versatile image enhancement pipeline that amalgamates various image processing filters, machine learning techniques, and evolutionary algorithms to optimise image quality metrics effectively.
Conclusion: The paper discusses the development of an adaptive image enhancement system using genetic programming and machine learning techniques, proposing a generic image enhancement pipeline that combines image processing filters, machine learning methods, and evolutionary approaches to optimise image quality metrics.
Adaptive Image Enhancement, Machine Learning, Image Processing Filters, Automated Algorithms, Subjective Quality Assessment
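The evolutionary pipeline search described above can be loosely illustrated as follows; this sketch replaces the genetic-programming loop with an exhaustive scoring of three candidate filters against a simple RMS-contrast fitness. All filter names and parameters are assumptions made for illustration, not the paper's actual pipeline.

```python
import numpy as np

# Illustrative stand-in for the evolutionary search: score a few
# candidate filters with a crude RMS-contrast fitness and keep the best.

def rms_contrast(img):
    return float(img.std())

candidates = {
    "identity":  lambda x: x,
    "gamma_0.5": lambda x: np.power(x, 0.5),
    "stretch":   lambda x: (x - x.min()) / (x.max() - x.min() + 1e-8),
}

rng = np.random.default_rng(0)
image = rng.uniform(0.2, 0.6, size=(64, 64))  # synthetic low-contrast image

best = max(candidates, key=lambda name: rms_contrast(candidates[name](image)))
print(best)
```

A genetic-programming system would instead evolve compositions of such filters, using a learned or perceptual quality metric as the fitness function.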
In a competitive business environment, creating an efficient workplace is crucial. Integrating digital technology into employee attendance systems is vital due to its significant impact on workforce efficiency and regulatory compliance. This study aims to formulate a predictive model for analyzing attendance patterns within a facial recognition-based attendance system, leveraging methodologies rooted in machine learning (ML) and deep learning (DL) paradigms. To enhance predictive precision, this research integrates regression and classification models derived from ML theory with DL techniques. The models used include Random Forest, XGBoost, SVM, KNN, and a Neural Network. Model effectiveness is assessed with four metrics: accuracy, precision, recall, and the F1 score. Data collection relies on a facial recognition-based attendance system, with models trained and tested within the Google Colab environment using Python. Findings reveal Random Forest and XGBoost as the most precise predictors of timeliness or tardiness among employees, considering age range and other pertinent factors, achieving an accuracy rate of 99%. Random Forest marginally outperforms XGBoost in both accuracy and F1 score, by 0.01%. This study is notable for its incorporation of attendance system data with ML and DL methodologies to predict attendance patterns based on age and other parameters, thereby enhancing decision-making processes and performance management.
pilot study; ESIB; innovative organizational culture; Kuwait; validation
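The evaluation protocol of the attendance study above (classifiers scored on accuracy, precision, recall, and F1) can be sketched as follows. The data here is synthetic, standing in for the facial-recognition attendance records, so the printed scores bear no relation to the reported 99% accuracy.

```python
# Hedged sketch of the evaluation loop: a Random Forest classifier
# scored on accuracy, precision, recall, and F1. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)

print("accuracy ", accuracy_score(y_te, pred))
print("precision", precision_score(y_te, pred))
print("recall   ", recall_score(y_te, pred))
print("f1       ", f1_score(y_te, pred))
```

The study's XGBoost, SVM, KNN, and neural-network baselines would be evaluated with the same four metrics.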
The study focuses on assessing segmentation models tailored for detecting drivable areas on Indian roads. Given the critical importance of accurate detection for tasks like autonomous navigation and road maintenance, the authors tackle the challenges posed by India’s complex road conditions. These conditions include erratic traffic patterns, inadequate road markings, and diverse road surfaces. The authors evaluate cutting-edge segmentation models—Mask2former, Cascade Mask-RCNN, Point REND, and YOLACT—using a binary segmentation dataset from the Indian Driving Dataset. Their comparison relies on the mean intersection over union (IoU) metric. Through thorough experimentation, the authors analyze each model’s performance in segmenting drivable areas accurately. Additionally, they examine the impact of basic data augmentation techniques on model performance. Their findings provide practical insights for selecting the most suitable segmentation model for deployment on Indian roads. This research contributes to the progress of autonomous driving technology and enhances road safety in the region.
Drivable Area Detection, Segmentation Models, Indian Driving Dataset
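The mean-IoU comparison used in the study above reduces, per image, to the following computation on binary drivable-area masks; the toy 4x4 masks are illustrative only.

```python
import numpy as np

# Minimal sketch of the IoU metric for binary drivable-area masks.
# A full evaluation would average this over every image in the dataset.

def iou(pred, gt):
    """Intersection over union for two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

gt   = np.zeros((4, 4), dtype=bool); gt[1:4, 1:4] = True    # 9 px region
pred = np.zeros((4, 4), dtype=bool); pred[0:3, 1:4] = True  # 9 px, shifted up

print(iou(pred, gt))  # intersection 6, union 12 -> 0.5
```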
With the advancement of technology and the widespread use of the internet, the concept of big data has emerged. Transforming big data, briefly defined as a stack of unstructured data, into meaningful information and revealing hidden patterns can be achieved by different methods. The use of this group of methods and algorithms, called data mining, together with artificial intelligence and statistics, enables more understandable, meaningful, and effective decisions to be made. This helps to optimize profits by reducing costs and increasing performance. In this study, using oil analysis data obtained from Borusan Cat (Caterpillar Inc.) construction machines, a model has been developed to enable early malfunction detection and identification of vehicle maintenance needs. With the developed system, the vehicles' oil analysis data are used with machine learning methods to predict, in advance, the number of working hours after which a vehicle will break down. The forecast data are then integrated into decision mechanisms in business processes, and finally, the information obtained is reported using data visualization technologies and made traceable through summary data. The system developed in this personalized, product-focused study, a subject of Industry 4.0, can be easily adapted to the operation of different machines. In this way, it will be easier to track and control the vehicles, and it will be possible to detect malfunctions without stopping the flow of the process. By extending the life of the machines and eliminating the cost of purchasing a new machine, which can be quite costly, or of repairs and spare parts due to possible damage, the proposed approach will provide significant returns to companies in terms of cost and time. Another important contribution of the system will undoubtedly be environmental sustainability.
Machine Failure, Machine Learning, Decision Tree Algorithm, Oil analysis, Environmental Sustainability
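A minimal sketch of the failure-hour prediction described in the oil-analysis abstract, assuming hypothetical oil-analysis features (wear-metal concentration, viscosity) and toy values; the actual Borusan Cat data and model configuration are not reproduced here.

```python
# Illustrative only: predicting hours-until-failure from oil analysis
# readings with a decision tree, as the abstract describes.
# Feature names (iron ppm, viscosity) and all values are hypothetical.
from sklearn.tree import DecisionTreeRegressor

# [iron_ppm, viscosity_cSt] -> remaining working hours before failure
X = [[10, 14.0], [25, 13.5], [60, 12.0], [90, 10.5], [120, 9.0]]
y = [900, 700, 400, 200, 50]

model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(model.predict([[70, 11.5]]))  # a machine with elevated wear metals
```

In the deployed system, such a forecast would feed the maintenance-scheduling decision mechanisms mentioned above.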
The efficacy of the blended learning method in enhancing children's enjoyment has been empirically demonstrated through the introduction of various concepts and knowledge. Nevertheless, in preschool education in Malaysia, blended learning integration remains a new and unfamiliar approach for educators seeking to bring 21st century learning into the classroom. Therefore, the purpose of this paper is to discuss preschool educators' perceptions of utilizing the Project-based Learning (PBL) approach and explore the importance of the PBL approach to children's development. This preliminary study was conducted in a virtual setting, with a sample of nine experienced preschool educators who participated in semi-structured interviews. Based on the thematic analysis findings of the study, a significant proportion of the participants agreed on the use of PBL in conjunction with blended learning for science education, given its impact in contemporary education. Therefore, the integration of these elements presents a viable opportunity to enhance educational standards for preschool children in the future, while also serving as a benchmark for improving Malaysia's educational system. As an implication, these preliminary findings will serve as a guideline for future researchers to develop suitable and interesting learning modules for preschool children.
The purpose of this article is to explore the impact and effectiveness of electronic educational technology in training human resources for the river tourism industry in Ho Chi Minh City, while also proposing solutions to improve and sustainably develop the sector. The article utilizes qualitative analysis methods through interviews and surveys of experts and tourism staff, along with the analysis of secondary data from previous reports and studies. The research results indicate positive outcomes in the application of electronic educational technology, particularly in enhancing skills and knowledge for employees, improving service quality, and management effectiveness. The article contributes practical, evidence-based strategies to enhance the effectiveness of electronic educational technology in the river tourism industry, contributing to sustainable economic and tourism development.
Electronic educational technology, human resource training, river tourism, Ho Chi Minh City
The increasing demand for cloud computing services has raised concerns about its environmental impact due to high energy consumption. This project explores green computing strategies in cloud data centers to reduce environmental impact while maintaining quality of service. It begins with a comprehensive literature review, focusing on power consumption patterns, hardware performance, virtualization, and workload optimization. The study evaluates energy-efficient technologies, including dynamic resource allocation, power management policies, energy-aware load balancing (Green Algorithms), and advanced cooling. Through simulations and case studies, it measures their impact on energy reduction and operational costs. The study addresses challenges and trade-offs, offering recommendations for sustainable cloud solutions in various scenarios. The results contribute to the field of green cloud computing, promoting an eco-friendly approach to meet computing demands while upholding service standards. We also investigate the optimization of data center efficiency through advanced algorithms, leveraging state-of-the-art simulation tools such as OpenDC to analyze and enhance performance metrics.
Green Cloud Computing, Data Centers, Energy Efficiency, Sustainability
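The energy-aware load-balancing idea mentioned above can be sketched as a greedy policy that places each job on the server whose power draw increases least. The linear power model and all server parameters below are assumptions for illustration, not from the study.

```python
# Hedged sketch: energy-aware placement under an assumed linear power
# model (idle watts + watts per unit of load). Numbers are illustrative.

def power(server, load):
    return server["idle_w"] + server["w_per_core"] * load

def place(jobs, servers):
    """Greedily assign each job to the server with the smallest power increase."""
    loads = {s["name"]: 0 for s in servers}
    for job in jobs:
        best = min(servers,
                   key=lambda s: power(s, loads[s["name"]] + job)
                                 - power(s, loads[s["name"]]))
        loads[best["name"]] += job
    return loads

servers = [
    {"name": "efficient", "idle_w": 60,  "w_per_core": 8},
    {"name": "legacy",    "idle_w": 120, "w_per_core": 15},
]
print(place([4, 2, 1, 3], servers))
```

A production policy would additionally respect server capacity and thermal limits, and consolidate load so that idle machines can be powered down, which is where the idle-watts term starts to matter.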
The Internet of Things (IoT) is an expanding network of interconnected devices that is exposed to a growing range of cyber security threats. The integration of AI-powered solutions presents a promising avenue for enhancing anomaly detection and classification. This study delves into the development of a comprehensive methodology leveraging machine learning and deep learning techniques. Utilizing the BoTNeTIoT-L01 dataset, meticulously curated from IoT devices, the research focuses on data gathering, preprocessing, and exploratory data analysis to unearth underlying patterns and anomalies within network traffic data. Subsequently, a suite of machine learning models, including Logistic Regression, LightGBM (Light Gradient-Boosting Machine), and Decision Tree, along with a deep learning model optimized with the Adam optimizer, is employed to detect and classify anomalies effectively. The comparative analysis underscores the superior performance of advanced models such as LightGBM and Decision Tree, showcasing their efficacy in accurately identifying security threats within IoT environments. The study also addresses pertinent technical challenges, ethical considerations, and future directions, emphasizing the imperative for responsible deployment and ongoing innovation in AI-powered IoT security solutions.
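A minimal sketch of the comparative evaluation described above, using synthetic traffic-like features as a stand-in for the BoTNeTIoT-L01 dataset. LightGBM and the deep model are omitted to keep the sketch dependency-free, so only the Logistic Regression and Decision Tree baselines appear.

```python
# Illustrative comparison of anomaly classifiers on synthetic data with
# a rare "anomaly" class, standing in for IoT network traffic records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=1)  # 10% anomalies
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

results = {}
for name, model in [("logreg", LogisticRegression(max_iter=1000)),
                    ("tree",   DecisionTreeClassifier(random_state=1))]:
    model.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, model.predict(X_te))
    print(name, round(results[name], 3))
```

With imbalanced classes like these, the study's additional metrics beyond accuracy would be needed to judge anomaly detection fairly.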
Animal models play a crucial role in the development of new radiopharmaceuticals in nuclear medicine. To calculate the absorbed dose in small-animal internal dosimetry, an accurate and widely available database of S-values is necessary. This study aims to provide a dataset of S-values for commonly used radionuclides, based on a mouse phantom, using the Monte Carlo simulation code GATE for all simulations. For benchmarking, a simulation was first conducted using the Digimouse phantom and a Tc-99m source, and the results obtained were compared with published results, which showed good agreement. Subsequently, the phantom was used to calculate S-values for eleven radionuclides used in nuclear medicine for diagnostic and therapeutic purposes. Finally, the phantom (26.9 g) was resized to simulate two additional mouse geometries, 19.6 g and 35.9 g, while maintaining the voxel size, and corresponding datasets were generated to evaluate variations in S-values for the 20 organs. This study systematically evaluated S-values for eleven radionuclides in three mouse geometries and examined the impact of organ mass variations on calculated S-values. The methodology for calculating S-values in organ-based voxelized phantoms is detailed, and the study provides a comprehensive database for internal dosimetry in mice for radiopharmaceuticals used in small-animal PET and SPECT studies.
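For readers unfamiliar with the quantity being tabulated: the S-value follows the standard MIRD formalism, where for a source region r_S and target region r_T the absorbed dose per decay is

```latex
S(r_T \leftarrow r_S) = \frac{1}{M(r_T)} \sum_i E_i \, Y_i \, \phi(r_T \leftarrow r_S, E_i)
```

where M(r_T) is the target organ mass, E_i and Y_i are the energy and yield of the radionuclide's i-th emission, and phi is the absorbed fraction, which is the quantity the GATE Monte Carlo simulation estimates. This also makes explicit why resizing the phantom matters: S-values scale with the organ masses M(r_T).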
The COVID-19 pandemic has heightened the urgency for swift and precise diagnostic tools to curb the virus's spread. In this context, artificial intelligence (AI) has emerged as a formidable ally, offering unparalleled capabilities in analysing medical images. This study delved into the application of AI for early detection of COVID-19 by conducting a comprehensive analysis of chest X-ray images. Integrating AI into diagnostic processes has revolutionised medical practices by enhancing accuracy and operational efficiency. Machine learning algorithms play a pivotal role in aiding radiologists to detect subtle anomalies and early indicators of diseases across various imaging techniques such as X-rays, MRIs, and CT scans. These AI-driven tools empower healthcare professionals to diagnose patients swiftly and accurately, ultimately leading to better treatment outcomes. This research highlights the indispensable contribution of artificial intelligence in the global battle against COVID-19. By scrutinising chest X-ray images, AI-powered diagnostic solutions can significantly improve precision and operational workflow, enabling healthcare providers to diagnose patients promptly and accurately. Our study trained a model from scratch on image data from 63,580 patients using a TensorFlow backend, achieving 91.54% accuracy in identifying COVID-19 cases on real-time data. As science and AI continue to evolve, our research underscores the promising potential of AI as a dependable diagnostic tool in combating this unprecedented pandemic.
Artificial intelligence, early detection, COVID-19, chest X-ray, image analysis, machine learning, healthcare, pandemic, diagnosis, deep learning, medical imaging, diagnostic tools, public health