Computer Science, Information Technology and Telecommunications
Browsing Computer Science, Information Technology and Telecommunications by Issue Date
Now showing 1 - 20 of 131
- Item: A three-step strategy for generalization of three-dimensional buildings modelled in city geographic markup language (Universiti Teknologi Malaysia, 2013) Baig, Siddique Ullah. For a better visual impression, three-dimensional (3D) information systems and landscape architectures need photo-realistic visualization of detailed 3D datasets. However, the detail attached to 3D objects makes easy access with efficient rendering difficult, so different applications demand different levels of detail (LoD). A single generalization method cannot remove or preserve every piece of building information at a given LoD, and different generalization strategies produce different generalized models. The aim of this thesis is therefore to contribute to the state of the art in 3D generalization methodologies. It proposes a 3D generalization framework based on a three-step (projection, generalization and reconstruction) strategy to generate less detailed, more abstract representations of buildings modelled in the City Geography Markup Language (CityGML). The proposed strategy focuses specifically on simplification and aggregation of building footprints based on point-reduction, edge-removal and small-circle strategies. A vertex-reduction method for simplifying complex building-footprint shapes is a further contribution to the field of 3D Geographic Information Systems (GIS). Experiments show that 3D generalization based on the CityGML generalization specifications can avoid removing important features of a building and fulfil the demands of task-specific applications. In most cases, data reduction is directly proportional to the edge-length threshold: the reduction in data volume is 10.5% for a 4-metre and 30.62% for a 6-metre threshold value.
About 37.65% of the data is reduced after generalization of the LoD1 CityGML model, compared with 30.18% at LoD2. Furthermore, only 3.31% of the boundary of Putrajaya's building footprints is eliminated at a 5-metre threshold value, despite the removal of 52% of the smaller edges. The authenticity of the generalized models is evaluated by comparing the similarity between the original and generalized boundaries of the building footprints. The proposed generalization strategy could be extended to generalize groups of buildings and to maintain topological relationships among generalized LoDs.
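The point-reduction idea described above belongs to the same family as classic polyline simplification. As an illustration only (not the thesis's exact algorithm), a minimal Douglas-Peucker-style sketch in Python shows how vertices whose removal would displace a footprint boundary by less than a tolerance can be dropped:

```python
import math

def _perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def simplify(points, tol):
    """Douglas-Peucker simplification: keep only vertices whose removal
    would displace the polyline by more than `tol`."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord joining the endpoints.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tol:                       # whole span is within tolerance
        return [points[0], points[-1]]
    left = simplify(points[:idx + 1], tol)
    right = simplify(points[idx:], tol)
    return left[:-1] + right              # drop the duplicated split vertex

# A footprint edge with a small jog that a 0.5 m tolerance removes:
edge = [(0, 0), (2, 0.1), (4, -0.1), (6, 0)]
print(simplify(edge, 0.5))   # -> [(0, 0), (6, 0)]
print(simplify(edge, 0.05))  # tighter tolerance keeps every vertex
```

The edge-length thresholds quoted in the abstract (4 m, 6 m) play the same role as `tol` here: a larger threshold removes more vertices and yields a smaller generalized model.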
- Item: Application of remote sensing techniques for deriving water-yield information in Peninsular Malaysia (Universiti Teknologi Malaysia, 2014-03) Ali, Mohamad Idris. Satellite remote sensing techniques have found wide application in hydrology, including water-yield determination. This, however, requires localization to the area of interest, which is influenced by local climate and biophysical factors. This study focused on developing a method for determining water-yield information for Peninsular Malaysia entirely from satellite-based data in the public domain, over a period of 10 years (July 2000 - June 2010). The specific objectives were to investigate: (i) derivation of monthly rainfall information from Tropical Rainfall Measuring Mission Multisatellite Precipitation Analysis (TMPA) satellite data; (ii) derivation of monthly actual evapotranspiration (AET) from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data with the Normalized Difference Vegetation Index (NDVI) data product; (iii) derivation of water yield from fully satellite-based information using water-balance analysis; and (iv) water-yield variation with respect to changes in the corresponding land cover and land use. Results indicated a good correlation between monthly TMPA rainfall and the corresponding rain-gauge records (r²=0.71, p<0.001, n=1337), with an accuracy (RMSE) of ±83 mm (n=2308). The TMPA-calibrated annual average rainfall for the entire study area is 2357 mm, which is -5.3% compared with independent studies undertaken by an international consultant appointed by the government. The MODIS-based biophysical parameters used NDVI as an indicator of AET to represent land use and showed a good match (r²=0.55, p<0.001, n=1664), with an accuracy (RMSE) of ±15 mm (n=864). The NDVI-calibrated annual average AET throughout the study area was determined at 1153 mm, which is -9.9% compared with the same independent research report.
The annual average water yield for the entire study area is 1204 mm, with -0.5% and 1.6% variation when compared with the two independent studies, namely the same independent research report and the Drainage and Irrigation Department, respectively. At state level, however, the estimated rainfall, AET and water yield vary with larger magnitudes. At selected basin level, the annual water yield is determined at 1393 mm, in excess of 9.5% compared with the independent studies' water flow rate, with a standard deviation of 22%. Regression analysis between water yield and land-use and land-cover change clearly indicated a strong relationship (r²=0.51, p<0.0001, n=151), with an independent accuracy (RMSE) of 8.3% (n=154). The main findings of this study, especially the devised techniques, contribute significantly as an alternative method for determining water yield in Peninsular Malaysia based on fully satellite-derived data. The devised method could be customized to other areas through a localized calibration approach and could thus serve as a guideline for the relevant authorities to obtain accurate and comprehensive water-yield information.
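In its simplest long-term form, the water-balance analysis of objective (iii) subtracts actual evapotranspiration from rainfall, assuming negligible change in storage (an assumption of this sketch, not stated in the abstract). The annual figures quoted above are consistent with this identity:

```python
def water_yield_mm(rainfall_mm, aet_mm, storage_change_mm=0.0):
    """Simplified annual water balance:
    yield = rainfall - actual evapotranspiration - change in storage,
    with the storage term assumed ~0 over a long averaging period."""
    return rainfall_mm - aet_mm - storage_change_mm

# Annual averages reported in the abstract for Peninsular Malaysia:
print(water_yield_mm(2357, 1153))  # -> 1204
```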
- Item: Malay statistical parametric speech synthesis with intelligibility improvement using artificial intelligence (Universiti Teknologi Malaysia, 2015) Lau, Chee Yong. Speech synthesis is important nowadays and can be a great aid in various applications, so it is important to build a simple, reliable, lightweight and easy-to-use speech synthesizer. However, conventional speech synthesizers require tedious human effort to prepare a high-quality recorded database, and the intelligibility of synthetic speech may decrease owing to polyphones (characters with more than one pronunciation), because the synthesizer may not contain definitions of the polyphones. Moreover, most speech synthesizers on the market are built on the unit-selection method, which requires a large database and relies on Malay linguistic knowledge. In this study, the statistical parametric speech synthesis method was adopted, using lab speech and free speech data harvested online. Intelligibility improvement was achieved using active learning and a feedforward neural network with back-propagation; the amount of training data used remained the same throughout the study. The result was evaluated using a perception test: the listening test showed that the intelligibility of the synthetic speech improved by about 20% to 30% using the artificial intelligence techniques. Volunteers were invited to take part in the active learning experiment, and their results showed no disagreement with the correct answers. In conclusion, a lightweight Malay speech synthesizer was created without relying on Malay linguistic knowledge. Using free sources as training data can ease the human effort of preparing a training database, and artificial intelligence techniques can improve the intelligibility of synthetic speech with the same amount of training data.
- Item: Multistage artificial neural network in structural damage detection (Universiti Teknologi Malaysia, 2015) Goh, Lyn Dee. This study addressed two main current issues in the area of vibration-based damage detection. The first issue was the development of a pragmatic method for damage detection using a limited number of measurements. A full set of measurements is required to establish a reliable result, especially when mode shape and frequency are used as damage indicators; however, this condition is usually difficult to achieve in real-life applications. Hence, in this study, a multistage artificial neural network (ANN) was employed to predict the unmeasured data at all unmeasured point locations, in order to obtain a full measurement set before proceeding to damage detection. The accuracy and efficiency of the proposed method were investigated, and the sensitivity of the method to the number of measurement points was examined through a parametric study. The second issue was the integration of uncertainties into the proposed multistage ANN. Uncertainties are inevitable in practical applications because of modelling and measurement errors. These uncertainties were incorporated into the multistage ANN through a probabilistic approach: the results were expressed as the probability of damage existence, computed using Rosenblueth's point-estimate method. The results of this study showed that the multistage ANN was capable of predicting the unmeasured data at the unmeasured point locations and, subsequently, was successful in predicting damage locations and severities. The incorporation of uncertainties further improved the proposed method. The results were supported by numerical examples and an experimental example of a prestressed concrete panel.
It is concluded that the proposed method has great potential to overcome the issue of using a limited number of sensors in the vibration-based damage detection field.
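Rosenblueth's point-estimate method, named above, propagates input uncertainty by evaluating a response function at a handful of deterministic points rather than by full sampling. A minimal one-variable, two-point sketch (the thesis applies the idea to damage probabilities; this toy version only shows the mechanics):

```python
def rosenblueth_2pe(g, mu, sigma):
    """Rosenblueth's two-point estimate for a single symmetric random
    input X with mean `mu` and standard deviation `sigma`: evaluate the
    response g at mu +/- sigma and combine the two values to approximate
    the mean and variance of g(X)."""
    plus, minus = g(mu + sigma), g(mu - sigma)
    mean = 0.5 * (plus + minus)
    var = 0.5 * (plus ** 2 + minus ** 2) - mean ** 2
    return mean, var

# Linear responses are reproduced exactly: g(x) = 2x + 3, X ~ (5, 2)
m, v = rosenblueth_2pe(lambda x: 2 * x + 3, 5.0, 2.0)
print(m, v)  # -> 13.0 16.0  (E[g] = 13, Var[g] = 4*sigma**2 = 16)
```

For multiple uncertain inputs the method evaluates g at all 2^n sign combinations of mu_i ± sigma_i; the one-variable case above is the smallest instance.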
- Item: Modified incremental single sink shortest path algorithm for three-dimensional city model (Universiti Teknologi Malaysia, 2015-03) Musliman, Ivin Amri. Successful network-constrained navigation depends mainly on accurate network geometry and semantics and on a visualization technique for optimal navigation routes. Current navigation is primarily implemented within the framework of Geographic Information Systems, a two-dimensional (2D) environment that lacks effective and comprehensive consideration of multi-dimensional, dynamic navigation information together with a good visual landmark map. This can be addressed by means of navigable surfaces on a true three-dimensional (3D) geometric model. This study therefore developed a prototype outdoor navigation application based on a practical 3D city model environment that combines two impedance factors. The first factor is a new technique to calculate 3D shortest-path routes that supports dynamic changes of information on road networks, using a modified Dijkstra incremental single-sink shortest-path algorithm. The algorithm maintains a given property P on a graph subject to dynamic changes such as edge insertions, edge deletions or edge-weight updates; it processes queries on property P quickly, and performs update operations faster than recomputing from scratch, as a standard algorithm would. The second factor automatically generates an informative map for outdoor navigation that utilizes the focus 3D map with a visual-landmark dominance enhancer based on a dominance function. The prototype was tested and evaluated against other algorithms; results showed that it is able to process a large city test dataset in less than twelve seconds.
Moreover, the use of the dominance function to automatically generate salient landmarks along the shortest 3D network improved the quality of the map presentation. Users would thus also be able to recognize real-world objects from the 3D model and use these prominent landmarks as navigational aids. In conclusion, the prototype has shown that the proposed impedance factors, applied within 3D city models, are able to perform true 3D navigation, in comparison with the existing 2D environment, which provides only 2D calculations.
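The modified incremental algorithm itself is not reproduced in the abstract. As a point of reference, the static computation it improves upon is standard Dijkstra, sketched below; an incremental variant would, after an edge-weight update, revise only the distances affected by that edge instead of re-running this from scratch:

```python
import heapq

def dijkstra(graph, source):
    """Standard single-source shortest-path distances on a weighted
    digraph given as {node: {neighbour: weight}}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical toy road network (weights, say, in minutes):
road = {"A": {"B": 4, "C": 1}, "C": {"B": 1}, "B": {"D": 2}}
print(dijkstra(road, "A"))  # -> {'A': 0.0, 'C': 1.0, 'B': 2.0, 'D': 4.0}
```

For a single-sink formulation, as in the thesis, the same routine is run on the reverse graph from the destination node.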
- Item: Malay articulation system for early screening diagnostic using hidden markov model and genetic algorithm (Universiti Teknologi Malaysia, 2016) Mazenan, Mohd. Nizam. Speech recognition is an important technology and can be a great aid for individuals with sight or hearing disabilities. There has been extensive research interest and development in this area over the past decades; however, the prospects in Malaysia regarding usage and exposure are still immature, even though there is demand from the medical and healthcare sector. The aim of this research is to assess the quality and impact of using a computerized method for early screening of speech articulation disorders among Malaysians, such as omission, substitution, addition and distortion in speech. In this study, a statistical probabilistic approach using Hidden Markov Models (HMM) was adopted, with a newly designed Malay corpus for articulation-disorder cases following the SAMPA and IPA guidelines. Improvements were made at the front-end processing for feature-vector selection by applying a silence-region calibration algorithm for start- and end-point detection. The classifier was also modified significantly by incorporating Viterbi search with a Genetic Algorithm (GA) to obtain high recognition accuracy and for lexical-unit classification. The results were evaluated following National Institute of Standards and Technology (NIST) benchmarking. Based on the tests, recognition accuracy improved by 30% to 40% using the Genetic Algorithm technique compared with the conventional technique. A new corpus was built, with verification and justification from a medical expert. In conclusion, a computerized method for early screening can ease human effort in tackling speech disorders, and the proposed Genetic Algorithm technique has been proven to improve recognition performance in search and classification tasks.
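The Viterbi search that the classifier couples with the GA can be sketched for a toy two-state HMM. All states and probabilities below are hypothetical, purely to show the mechanics of the search:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for `obs` under a discrete HMM."""
    # best[t][s] = (probability of best path ending in state s, predecessor)
    best = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            p, prev = max((best[-1][r][0] * trans_p[r][s] * emit_p[s][o], r)
                          for r in states)
            row[s] = (p, prev)
        best.append(row)
    state = max(states, key=lambda s: best[-1][s][0])  # best final state
    path = [state]
    for row in reversed(best[1:]):                     # backtrack
        state = row[state][1]
        path.append(state)
    return path[::-1]

# Hypothetical toy model: silence vs. speech, observing frame energy.
states = ("sil", "speech")
start = {"sil": 0.8, "speech": 0.2}
trans = {"sil": {"sil": 0.6, "speech": 0.4},
         "speech": {"sil": 0.3, "speech": 0.7}}
emit = {"sil": {"low": 0.9, "high": 0.1},
        "speech": {"low": 0.2, "high": 0.8}}
print(viterbi(("low", "high", "high"), states, start, trans, emit))
# -> ['sil', 'speech', 'speech']
```

In the thesis's setting the GA would tune parameters around this search; here the probabilities are simply fixed by hand.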
- Item: Enhancement of three-dimensional geospatial web services using compression techniques (Universiti Teknologi Malaysia, 2016) Siew, Chengxi Bernad. Two-dimensional (2D) Spatial Data Infrastructure (SDI) has been widely applied in areas such as city planning, disaster management, urban navigation and urban modelling. Three-dimensional (3D) SDI, however, has not seen much progress: current examples of such initiatives, Berlin-3D and Heidelberg-3D, are not fully developed. Use of the City Geography Markup Language (CityGML) decreases data-transfer efficiency and raises data-storage issues in applications, such as 3D Geographic Information System (GIS) analysis, that require both geometry and semantic information. This efficiency determines the way spatial data are handled in an SDI; consider, for instance, small devices used to examine 3D urban models in a 3D immovable-property tax appraisal exercise. CityGML data transmission for analysis purposes is impractical owing to inefficiencies in file size and bandwidth consumption, and there is currently no efficient handling of 3D spatial data for either geometric or semantic information. This research investigates existing 3D SDI use cases and frameworks, as well as XML and general compression techniques. It also examines methods to improve efficiency by employing the proposed design, with schema-awareness attached to an encoder within web services, and then proposes a generic solution for SDI development. Furthermore, the research proposes a coupling layer of encoder and decoder within web services, or between the web service and client side, acting as a compression and decompression tool for CityGML data transactions within a 3D SDI. The implementation and analysis show that the algorithm produced file sizes 15 per cent smaller for the lossless option and 20 to 30 per cent smaller for the near-lossless option, in comparison with the state-of-the-art Lempel-Ziv-Markov chain algorithm (LZMA) alone.
The encoder also examines the entropy of the input data and shows that the encoding process significantly reduces the bandwidth, to between seven and nine per cent of the original data size. Chaining the encoder and decoder enables binary data transactions within web services while maintaining the benefit of data interoperability. Future work includes integrating image compression for 3D object textures and the handling of 3D spatial data on small devices. The custom application network protocol could be further examined for handling data transactions for 3D spatial data standards in OGC Web Services (OWS).
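The LZMA baseline against which the encoder is benchmarked is available in Python's standard library. A small sketch of measuring a compressed-to-original size ratio on a repetitive CityGML-like fragment (the element names and content are illustrative, not taken from the thesis's test data):

```python
import lzma

def compression_ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size, using the
    standard library's LZMA implementation."""
    return len(lzma.compress(data)) / len(data)

# A highly repetitive CityGML-like fragment (illustrative element names):
xml = (b"<bldg:Building><bldg:lod2Solid>1.0 2.0 3.0</bldg:lod2Solid>"
       b"</bldg:Building>") * 200
print(f"{compression_ratio(xml):.3f}")  # well below 0.1 for data this redundant
```

XML's verbose, repetitive tag structure is exactly why a schema-aware encoder, as proposed in the thesis, can beat a general-purpose compressor: the schema lets it avoid transmitting structure the receiver can already predict.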
- Item: A structure of waqf land declaration and registration system in Malaysia (Universiti Teknologi Malaysia, 2017) Ghazali, Noor Azimah. Under Islamic law, waqf is special and differs from other properties and forms of ownership because of the elements of perpetuity, inalienability and irrevocability as soon as the waqif makes a declaration. Land or any property declared by an individual to become waqf is considered binding and valid as soon as the donor fulfils the five pillars (rukn) of waqf declaration: sighah, waqif, mawquf, mawquf alaihi and nazir. However, under Malaysian land law, waqf properties need to be registered at the Land Office to ensure that the State Islamic Religious Council (SIRC) has indefeasible ownership under the National Land Code (NLC) 1965. Even though a registration system for waqf land exists under the Malaysian legal system, the system is lengthy and rather disorganized, and can cause the declaration of a waqif to be defeasible and not binding, risking the loss of the waqf land. This research examined the existing structure for waqf land declaration and registration in order to improve the system within the scope of the Malaysian legal system while remaining compliant with Islamic law. The research methodology adopted was content analysis through a systems theory approach, together with semi-structured interviews with identified experts; the collected data were analyzed qualitatively. From the findings, the researcher identified that the declaration of a waqif is defeasible and not binding because the waqf land is not free from encumbrances within the existing structure of the waqf land declaration and registration system. The research therefore developed a new, improved structure for waqf land declaration and registration in Malaysia, under which the declaration of a waqif shall be valid, binding and indefeasible in future.
In conclusion, the newly proposed structure should make the process of waqf land declaration and registration easier, faster and more efficient, hence securing the waqf property from the risk of being lost.
- Item: Prediction of fracture dip using artificial neural networks (Universiti Teknologi Malaysia, 2017) Alizadeh, Mostafa. Fracture characterization and fracture dip prediction can provide desirable information about fractured reservoirs. Fractured reservoirs are complicated, and current technology sometimes takes considerable time and cost to provide all the desired information about them. Core recovery is rarely good in a highly fractured zone; hence, fracture dip measured from core samples is often imprecise. Data-prediction technology using Artificial Neural Networks (ANNs) can be very useful in these cases: data for an undrilled depth can be predicted in order to achieve a better drilling operation, or, when a group of data is missing, the missing data can be predicted from the remaining data. Consequently, this study was conducted to introduce the application of ANNs to fracture-dip data prediction in fracture characterization technology. ANNs are among the best available tools for generating linear and nonlinear models; they are computational devices consisting of groups of highly interconnected processing elements called neurons, inspired by scientists' interpretation of the architecture and functioning of the human brain. A feedforward back-propagation neural network was run to predict the fracture dip angle for a third well using the image-log data of two nearby wells. The predicted fracture-dip data were compared with the fracture-dip data from the image logs of the third well to verify the usefulness of the ANNs. According to the results obtained, the ANN can be used successfully for modelling the fracture-dip data of the three studied wells: the high correlation coefficients and low prediction errors confirm the good predictive ability of the ANN model, with correlation coefficients of 0.95 and 0.91 for the training and test sets, respectively.
Significantly, a non-linear approach based on ANNs makes it possible to improve the performance of fracture characterization technology.
- Item: Hybrid peak to average power ratio reduction in orthogonal frequency division multiplexing system (Universiti Teknologi Malaysia, 2017) Jaber, Ali Yasir. Orthogonal Frequency Division Multiplexing (OFDM) is a widely used multi-carrier modulation in wireless communication systems because it enables high-throughput data transfer and is robust against the frequency-selective fading caused by the multipath wireless channel. Nevertheless, OFDM suffers from disadvantages such as a high Peak-to-Average Power Ratio (PAPR) and high sensitivity to Carrier Frequency Offset (CFO), which leads to a loss of subcarrier orthogonality and severe system degradation. A suitable reduction technique should therefore be used in an OFDM system to mitigate these drawbacks. Mitigating the impacts of PAPR and of Inter-Carrier Interference (ICI) due to CFO at the OFDM transmitter is the main target of this work, and PAPR and ICI reduction methods are proposed at the transmitter. A Clipping Peaks Amplifying Bottoms (CPAB) method is developed to reduce PAPR, in which the negative peaks of the clipped OFDM signal are amplified. To reduce the PAPR level further, a combination of Partial Transmit Sequence (PTS) with cascade CPAB (PTS-CCPAB) is proposed, and to improve BER performance, a CFO compensation method is added to the hybrid PTS-CCPAB. The work was simulated in MATLAB using the parameters of the Wireless Access in Vehicular Environments (WAVE) IEEE 802.11p standard. The hybrid PTS-CCPAB/CFO introduced a PAPR reduction gain (RG) of 39% compared with the conventional system. System performance at BER = 10^-4 also improved by 12% and 5% over Additive White Gaussian Noise (AWGN) and Rayleigh channels, respectively, compared with the conventional system. Overall, the results show that the proposed work is a suitable solution to mitigate the loss of subcarrier orthogonality and system degradation by improving both PAPR and BER performance.
The proposed work can be used in most multicarrier wireless communication systems.
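PAPR, the quantity the CPAB method targets, is the ratio of a signal's peak instantaneous power to its average power, usually quoted in dB. A minimal sketch of computing it, together with plain amplitude clipping (only the clipping half of CPAB; the bottom-amplification step and PTS are omitted):

```python
import math

def papr_db(samples):
    """Peak-to-average power ratio of a sampled signal, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    peak, avg = max(powers), sum(powers) / len(powers)
    return 10 * math.log10(peak / avg)

def clip(samples, limit):
    """Amplitude clipping: pull any sample above `limit` back to it,
    preserving its phase (works for real or complex samples)."""
    return [s if abs(s) <= limit else s * (limit / abs(s)) for s in samples]

sig = [0.5, 1.0, 4.0, 0.5, 1.0, 0.5]      # one strong peak (toy values)
print(round(papr_db(sig), 2))             # -> 7.09
print(round(papr_db(clip(sig, 2.0)), 2))  # -> 5.51 (clipping lowers the PAPR)
```

Clipping alone distorts the signal and raises out-of-band noise, which is why the thesis cascades it with PTS and a bottom-amplification step rather than using it in isolation.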
- Item: Adaptive anomaly based fraud detection model for handling concept drift in short-term profile (Universiti Teknologi Malaysia, 2018) Alabdeen, Aisha Abdallah Zain. Fraud is a cybercrime whose purpose is to take money by illegal means; it results in significant losses to organizations, companies and government agencies. Detecting fraud accurately will help reduce such losses, for instance by using anomaly detection, which relies on behavioural modelling methods. An anomaly-based Fraud Detection System (FDS) model aims to detect and recognize fraudulent activities or anomalies as they enter a system and to report them accordingly. Many anomaly-based FDSs have been proposed in the literature; however, current models suffer from low accuracy, high false-alarm rates and delayed detection, owing to behaviour drifting over time (the concept drift issue), evolving behavioural patterns of customers, hidden indicators, and high dimensionality. The main purpose of this research is to design and develop an adaptive anomaly-based FDS model built on a concept-drift detection technique using a short-term aggregation profile, to improve fraud-detection accuracy and support early fraud detection. Two main phases are involved: the data pre-processing phase and the fraud and concept-drift detection phase. The data pre-processing phase contains two stages: first, feature derivation and profile building; second, feature selection. The first stage supports early detection by combining newly derived features with features drawn from the literature. The rank-search feature selection stage is a hybrid approach consisting of two steps: the Support Vector Machine Recursive Feature Elimination (SVM-RFE) rank method and the Greedy Stepwise (GS) search method.
The feature selection stage improves fraud-detection accuracy by selecting the optimum features of user behaviour. In the second phase of the proposed adaptive FDS model, the fraud and drift detection phase, an effective online streaming approach based on an incremental classifier is adopted to accurately discriminate fraudulent from normal data. For concept-drift detection, a trigger-based approach is used for adaptive learning, and an adaptive training window is used to manage the training data. The Statistical Process Control (SPC) technique is used as the drift detector to identify sudden and gradual drifts in user behaviour. A Call Detail Records (CDR) dataset containing Subscriber Identity Module (SIM) box fraud is used to test and evaluate the proposed model. The proposed adaptive Incremental Learning Strategy and Concept Drift Detection Technique (FDS-ILS-CDDT) model, integrated with the rank-search feature selection approach, improves the detection accuracy of SIM box fraud containing concept drifts. The average daily detection accuracy for DATA-CP (continuous pattern) increased to 91.40%, compared with 91.16%, 88.08% and 90.81% for the FDS-SLS, FDS-PLS and FDS-ILS models, respectively. A similar improvement occurred for DATA-CDP (continuous and discrete pattern): 89.34%, compared with 84.55%, 83.81% and 85.11% for the FDS-SLS, FDS-PLS and FDS-ILS models, respectively. Furthermore, FDS-ILS-CDDT obtained the best false-negative and false-positive rates compared with the other FDS models. The features are reduced to the two and eleven most relevant and influential features for DATA-CP and DATA-CDP, respectively.
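The SPC-style drift detector can be illustrated with a DDM-like sketch (after Gama et al.'s Drift Detection Method): it tracks the classifier's streaming error rate and flags a warning or a drift when the rate rises significantly above its historical best. The 2-sigma/3-sigma thresholds and the 30-sample warm-up below are conventional choices, not necessarily the thesis's exact settings:

```python
import math

class DriftDetector:
    """DDM-style SPC monitor on a stream of prediction errors
    (1 = misclassified, 0 = correct)."""

    WARM_UP = 30  # don't judge before the estimate stabilises

    def __init__(self):
        self.n = 0
        self.p = 1.0                  # running error-rate estimate
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        self.n += 1
        self.p += (error - self.p) / self.n
        s = math.sqrt(self.p * (1 - self.p) / self.n)
        if self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s  # new best operating point
        if self.n < self.WARM_UP:
            return "stable"
        if self.p + s > self.p_min + 3 * self.s_min:
            return "drift"
        if self.p + s > self.p_min + 2 * self.s_min:
            return "warning"
        return "stable"

d = DriftDetector()
for _ in range(40):
    d.update(0)                # model performing well: no errors
print([d.update(1) for _ in range(5)])  # errors begin: drift is flagged
```

In the FDS setting, a "drift" signal would trigger retraining of the incremental classifier on the adaptive training window described above.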
- Item: Enhanced on-demand routing protocols in high mobility mobile Ad hoc network (Universiti Teknologi Malaysia, 2018) Al-Nahari, Abdulaziz Yahya Yahya. Mobile ad hoc networks (MANETs) are important wireless networks, especially with the growth of sensor applications and mobile devices. However, finding a path between sender and receiver nodes and keeping it available are the main issues in MANETs under high-mobility scenarios. High mobility and high traffic load increase the occurrence of link failures, and thus the delay of exploring a new path, resulting in a lower packet delivery ratio. A routing protocol that can cope with high mobility in MANETs therefore remains an important necessity. Existing protocols use multiple paths from the sender node to improve network reliability; however, in high mobility these multipath protocols suffer from short route lifetimes, in which the sender node uses unstable routes and needs more control packets to explore new paths. Moreover, under high traffic load, the time to transfer a packet from one node to another is longer, which degrades packet delivery ratio and end-to-end delay performance. This research therefore designed a routing protocol that keeps the sender node updated with new paths within a shorter time, and incorporated a reliable routing protocol that considers route stability and delay when selecting the multiple paths. To achieve this aim, first, a single-path Receiver-Based Ad hoc On-demand Distance Vector (RB-AODV) routing protocol was designed to decrease the time delay of exploring a new path: the receiver node explores paths toward the sender node when it has not received data packets for a period of time. To obtain a reliable protocol, a Receiver-Based Ad hoc On-demand Multipath Distance Vector (RB-AOMDV) routing protocol was then designed.
Reliability was achieved by using multiple paths, which increased the packet delivery ratio. Finally, a Route Stability and Delay Aware RB-AOMDV (RSDARB-AOMDV) protocol was designed, taking link lifetime and delay metrics as constraints when selecting paths, to decrease the control overhead and ensure route stability with less time delay. Simulation results showed that RSDARB-AOMDV improved network performance in terms of end-to-end delay (44%), packet delivery ratio (16%) and normalized routing load (13%) in different mobility scenarios, compared with the RB-AOMDV and MMQARP protocols. Moreover, under different traffic loads, network performance improved in terms of end-to-end delay (39%), packet delivery ratio (11%) and normalized routing load (9%). Based on these findings, the proposed routing protocol is a suitable solution for reducing end-to-end delay and increasing packet delivery ratio under high node mobility and high traffic load.
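The stability-and-delay-aware path selection described above can be caricatured as a weighted trade-off between predicted link lifetime and end-to-end delay. The scoring function, weights and route data below are hypothetical, purely to illustrate the kind of multi-metric selection involved, not RSDARB-AOMDV's actual formula:

```python
def pick_route(candidates, w_life=0.7, w_delay=0.3):
    """candidates: list of (path, lifetime_s, delay_ms).  Returns the path
    maximising a weighted trade-off of route stability (longer predicted
    lifetime is better) against delay (lower is better), with both
    metrics min-max normalised across the candidate set."""
    lifes = [c[1] for c in candidates]
    delays = [c[2] for c in candidates]
    lo_l, hi_l = min(lifes), max(lifes)
    lo_d, hi_d = min(delays), max(delays)

    def norm(x, lo, hi):
        return 0.5 if hi == lo else (x - lo) / (hi - lo)

    def score(c):
        return (w_life * norm(c[1], lo_l, hi_l)
                + w_delay * (1 - norm(c[2], lo_d, hi_d)))

    return max(candidates, key=score)[0]

routes = [(["S", "A", "D"], 12.0, 40.0),   # stable but slower
          (["S", "B", "D"], 3.0, 25.0)]    # fast but short-lived
print(pick_route(routes))  # -> ['S', 'A', 'D']
```

With stability weighted heavily, the longer-lived route wins even though it is slower; shifting the weights toward delay flips the choice.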
- Item: Enhanced forensic process model in cloud environment (Universiti Teknologi Malaysia, 2018) Moussa, Ahmed Nour. Digital forensics practitioners have used conventional digital forensics process models to investigate cloud security incidents; at present, there is no agreed-upon or standard process model in cloud forensics. The literature has shown an explicit need for consumers to collect evidence for due-diligence or legal reasons, yet a consumer-oriented cloud forensics process model is still missing. This has created a lack of consumer preparedness for cloud incident investigations and a dependency on providers for evidence collection. This research addressed these limitations by developing a cloud forensic process model, using a design science research methodology. A set of requirements, believed to be solutions to the challenges reported in three survey papers, was applied. These requirements were mapped to existing cloud forensic process models to further expose their weaknesses. A set of process models suitable for the extraction of necessary processes was selected based on the requirements, and the selected models constituted the basis of the cloud forensic process model. The processes were consolidated, and the model was proposed to alleviate the problem of dependency on the provider. The model considers three digital forensic types: forensic readiness, live forensics and post-mortem forensic investigations. In addition, a Cloud-Forensic-as-a-Service model that produces evidence trusted by both consumers and providers, through a conflict-resolution protocol, was also designed. To evaluate the utility and usability of the model, a plausible case scenario was investigated. For validation, the cloud forensic process model, its implementation in the case scenario, and the set of requirements were presented to a group of experts for evaluation.
The effectiveness of the requirements was rated positively by the experts. The findings indicated that the model can be used for cloud investigation and was rated easy for consumers to use and adopt.
- Item: A decision support model for demolition waste management (Universiti Teknologi Malaysia, 2018) Rakshanifar, Mansooreh. Demolition waste management is the process of managing, collecting, handling and disposing of waste in demolition projects. Significant effects on resource preservation, the environment, and public health and safety are the main concerns associated with demolition waste; hence, the lack of a sound decision-making system for demolition waste management can negatively affect the construction and demolition industry. This study therefore aimed to develop an integrated tool to assist decision making for demolition waste management. To achieve this target, an overall review of the whole life cycle of demolition waste was conducted. An in-depth literature review and interviews with experts led to a complete description of the factors affecting waste management during demolition projects. Critical factors of demolition waste management were identified using a risk assessment approach integrated with Delphi method analysis. Next, the critical factors and the different waste paths were assessed based on the consensus opinion of the experts' panel. The data from the expert meeting sessions were analyzed using the Analytic Network Process (ANP) Benefit, Opportunity, Cost and Risk (BOCR) model; the ANP BOCR and rating models were used to rank the critical factors and demolition waste paths. To evaluate the developed model, the research used three case studies: The Garden Premium Parking, Masjid Asy-Sakirin and Putra Bus Station. The functionality of the model was evaluated by four evaluators, and the results confirmed that the model satisfied 74.5% of expectations.
The developed model, referred to as the Demolition Waste Management Model (DWMM), enables the decision makers in a demolition project to systematically and semi-quantitatively identify, analyze and evaluate waste management factors. The DWMM acts as an information source that demolition contractors can use to identify and evaluate demolition waste-related factors to be incorporated into the project design.
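As an illustrative aside (not taken from the thesis itself), criterion weights in ANP/AHP-style models such as the ANP BOCR approach above are commonly derived from the principal eigenvector of a pairwise comparison matrix. The sketch below uses a hypothetical comparison matrix for three demolition-waste criteria; the values and criterion count are assumptions for illustration only.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three demolition-waste
# criteria on the Saaty 1-9 scale: A[i][j] = importance of criterion i over j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

def priority_vector(M, iters=100):
    """Principal eigenvector of M via power iteration, normalised to sum 1."""
    v = np.ones(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v = v / v.sum()
    return v

w = priority_vector(A)
print(w)  # criterion weights; here the first criterion dominates
```

The resulting vector is the relative weight of each criterion, which an ANP BOCR model would then combine across the benefit, opportunity, cost and risk subnetworks.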
- ItemE-commerce implementation process framework for business-to-customer Malaysian small and medium enterprises(Universiti Teknologi Malaysia, 2018) Paris, Deborah LibuMany small and medium-sized enterprises (SMEs) have recently decided to implement electronic commerce (e-Commerce). Doing so requires them to implement e-Commerce systems successfully, since an inefficient implementation may endanger the company's survival. However, despite the extensive benefits of e-Commerce, there is still a lack of e-Commerce implementation among SMEs in Malaysia. Therefore, this research aims to develop an e-Commerce Implementation Process Framework for Business-to-Customer (B2C) Malaysian SMEs. A collection of the actual experiences of SME e-Commerce champions who were involved in the implementation of e-Commerce can facilitate the identification of the activities and the primary determinants for inclusion in the implementation process. This research adopted a positivist qualitative research approach, using case studies of Malaysian SMEs in the fashion and apparel sector that have implemented B2C e-Commerce. In-depth interviews with e-Commerce champions were conducted: three for the pilot case studies and six for the primary case studies. The analysis of the collected data was divided into four phases. The first phase involved both within-case and cross-case analysis of the pilot cases. The second and third phases involved within-case and cross-case analysis of the primary cases. The fourth phase involved the verification of the framework with four experts in the related field. The thematic analysis method and NVivo software were employed for data analysis. The findings from this research consist of a set of implementation activities and their main determinants characterising the B2C e-Commerce implementation process, based on Kotter’s eight-stage framework.
Finally, these results provide an appropriate framework for the B2C e-Commerce implementation process for Malaysian SMEs in the fashion and apparel sector. The framework is beneficial for e-Commerce practitioners as well as researchers who have similar interests in this field.
- ItemSpiking neurons in 3D growing self-organising maps(Universiti Teknologi Malaysia, 2018) Yusob, BariahIn Kohonen's Self-Organising Map (SOM) learning, preserving the map topology so that it reflects the actual input features is a significant process. Misinterpretation of the training samples can lead to failure in identifying the important features that may affect the outcomes generated by the SOM model. Nonetheless, it is a challenging task, as most real problems are composed of complex and insufficient data. The Spiking Neural Network (SNN) is the third generation of Artificial Neural Network (ANN), in which information is transferred from one neuron to another using spikes, which are processed to trigger a response as output. This study therefore embedded spiking neurons in SOM learning in order to enhance the learning process. The proposed method was divided into five main phases. Phase 1 investigated issues related to the SOM learning algorithm, while in Phase 2 datasets were collected for the analyses carried out in Phase 3, in which a neural coding scheme for the data representation process was implemented in the classification task. Next, in Phase 4, the spiking SOM model was designed, developed, and evaluated using classification accuracy rate and quantisation error. The outcomes showed that the proposed model attained an exceptional classification accuracy rate with low quantisation error, preserving the quality of the generated map based on the original input data. Lastly, in the final phase, a Spiking 3D Growing SOM is proposed to address the surface reconstruction issue by enhancing the spiking SOM with a 3D map structure and a growing grid mechanism. The application of spiking neurons to enhance the performance of SOM is relevant in this study due to their ability to spike and send a reaction when special features are identified based on learning of the presented datasets.
The study outcomes contribute to the enhancement of SOM in learning the patterns of the datasets, as well as in proposing a better tool for data analysis.
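For readers unfamiliar with the SOM learning referred to above, a single Kohonen update step can be sketched as follows. This is a minimal illustration of the classical algorithm with hypothetical parameter names and decay schedules, not the thesis's spiking implementation:

```python
import numpy as np

def som_step(weights, x, t, grid, sigma0=1.0, lr0=0.5, tau=10.0):
    """One Kohonen SOM update: find the best-matching unit (BMU) and
    pull its grid neighbours toward the input vector x."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
    lr = lr0 * np.exp(-t / tau)                           # decaying learning rate
    sigma = sigma0 * np.exp(-t / tau)                     # shrinking neighbourhood
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)          # grid distance to BMU
    h = np.exp(-d2 / (2 * sigma ** 2))                    # Gaussian neighbourhood
    return weights + lr * h[:, None] * (x - weights)
```

Repeated over many inputs with decaying `lr` and `sigma`, this update is what produces the topology-preserving map; the thesis replaces the rate-based neurons in this loop with spiking ones.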
- ItemThe gamification model for motivating energy conservation behaviour(Universiti Teknologi Malaysia, 2018) Wee, Siaw ChuiThe issue of continuous growth in energy consumption has become a major concern in the facilities management profession, as buildings contribute a considerable amount of energy consumption. Research shows that energy issues are closely related to human behaviour and can consequently be improved through the promotion of better energy use behaviour. Previous researchers have suggested that universities should be targeted as the starting point for energy conservation. However, the existing energy conservation behaviour fostering programmes (energy-saving campaigns) are unsuccessful due to a lack of motivation among students. The emerging concept called gamification, which emphasises motivating particular behaviours using game design elements in non-game contexts, has now been extended into the environmental context. Therefore, this research aims to develop a gamification model for energy-saving campaigns to understand the causal relationships between game design elements and intrinsic motivation among university students towards energy-saving campaigns. This research consists of three objectives. The first objective was to propose the game design elements and a gamification model for motivating energy conservation behaviour. The second objective was to develop the gamification model for motivating energy conservation behaviour, and the third objective was to determine the differences across demographic profile groups on the gamification model for motivating energy conservation behaviour. The first objective was achieved through the synthesis of theories and concepts in the literature review, whereas the second and third objectives were achieved through a questionnaire survey conducted at five research universities in Malaysia, involving 2000 respondents.
The collected data were analysed using structural equation modelling with SmartPLS 2.0 software. A gamification model for energy-saving campaigns was developed in this research. The findings indicate that the nine identified core game design elements for implementation in energy-saving campaigns were able to create intrinsic motivation among students towards the campaigns. In addition, significant differences were observed between nationality groups (Malaysian and non-Malaysian) and between education level groups (undergraduate and postgraduate) among university students in Malaysia, yet no significant difference was found across gender (male and female). This study contributes to the existing knowledge of gamification by identifying the core game design elements and providing empirical evidence that gamification can motivate energy conservation behaviour.
- ItemIndoor path loss modeling for fifth generation applications(Universiti Teknologi Malaysia, 2018) Majed, Mohammed BahjatThe demand for high data rate transmission in future wireless communication technology is increasing rapidly. Due to congestion in the current cellular network bands, these bands may not be able to satisfy user requirements. For future cellular networks, the millimeter wave (mm-wave) bands are promising candidates because of the large available bandwidth, and the 28 GHz and 38 GHz bands are the strongest candidates for fifth generation (5G) cellular networks. The channel needs to be characterized in terms of large-scale parameters to understand its behavior in the mm-wave bands in indoor environments; the narrowband channel is characterized by the path loss model. For the development of new 5G systems operating in bands up to 100 GHz, accurate radio propagation models are needed, which are not addressed by existing channel models developed for bands below 6 GHz. This characterization was conducted through extensive measurement campaigns and by using the Information and Communication Solutions (ICS) Telecom simulation tool. The measurement environments were a closed-plan scenario in two buildings that included a line-of-sight (LOS) and non-line-of-sight (NLOS) corridor, a hallway, a cubicle room, and different adjacent-room communication links. The main limitation of the study was the limited distance range of the LOS and NLOS environments due to the building structure design. Well-known single-frequency and multi-frequency directional and omnidirectional large-scale path loss models, such as the close-in free space reference (CI), floating intercept (FI) and alpha-beta-gamma (ABG) models, together with a modified model, are presented in this thesis. The modified model has a correction factor for different environments and provides physically-based, efficient estimates of path loss at the reference distance.
Directional path loss modeling was performed for co-polarized and cross-polarized antenna orientations, while omnidirectional path loss modeling was performed for the co-polarized antenna orientation only. The ICS Telecom simulation results show very high agreement with the measurement campaign results. It is also found that the CI model is simpler, more convenient and more accurate for path loss prediction compared with the FI and ABG models. Furthermore, the results show that the modified large-scale path loss model has smaller path loss exponent (PLE, n) and standard deviation (σ) values than the CI model. The results suggest that the modified path loss model can provide a sound estimate of path loss and act as a reference for developing mm-wave wireless communication planning in indoor environments.
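The CI model mentioned above anchors path loss to the free-space loss at a 1 m close-in reference distance and fits a single path loss exponent n. A minimal sketch of that fit, assuming the standard CI formulation PL(d) = FSPL(f, 1 m) + 10 n log10(d) + X_sigma and using synthetic rather than measured data:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def fspl_1m(freq_hz):
    """Free-space path loss at the 1 m close-in reference distance (dB)."""
    return 20 * np.log10(4 * np.pi * freq_hz / C)

def fit_ci_ple(dists_m, pl_db, freq_hz):
    """Least-squares path loss exponent n for the CI model:
    PL(d) = FSPL(f, 1 m) + 10 n log10(d) + X_sigma."""
    a = pl_db - fspl_1m(freq_hz)       # excess loss over the 1 m anchor
    b = 10 * np.log10(dists_m)
    n = np.sum(a * b) / np.sum(b * b)  # closed-form single-parameter fit
    sigma = np.std(a - n * b)          # shadow-fading standard deviation
    return n, sigma
```

Because the 1 m anchor is fixed by physics, only n is free, which is why the CI model is often more stable across environments than the two- and three-parameter FI and ABG fits.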
- ItemUnequally spaced microstrip linear antenna arrays for fifth-generation base station(Universiti Teknologi Malaysia, 2018) Zainal, Noor Ainnie SafinaWireless communication technology has been continuously evolving towards the future fifth generation (5G), in which multi-beam, multi-frequency, and low-sidelobe characteristics are required in the mobile base station. However, the low sidelobe level of a conventional mobile base station antenna leads to a more complex feeding network design in order to provide adequate excitation coefficients (amplitude and phase) to the array elements. Thus, current base station antennas are difficult to use over a wide frequency range because their frequency range is limited. Subsequently, in this research, unequally spaced microstrip linear antenna arrays are proposed, and the radiation pattern synthesis for low sidelobes and grating lobe suppression over a wide frequency range is investigated. In the first stage, a single antenna is designed at 28 GHz, followed by a 16-element linear array, in order to achieve the gain requirement for a mobile base station antenna. Next, the design of antenna arrays with sidelobe reduction is proposed. Three configurations of linear antenna arrays are designed, namely an equally spaced array (ESA), unequally spaced array 1 (USA 1) and unequally spaced array 2 (USA 2), at frequencies fo = 28 GHz, f1 = 42 GHz and f2 = 56 GHz with a similar array aperture, in order to investigate the antenna performance over a wide frequency range. USA 1 and USA 2 have different centre spacings of the array (dc), namely dc(USA1) = 0.6 mm and dc(USA2) = 0.5 mm, respectively. The simulation results are obtained using the High Frequency Structure Simulator (HFSS). Good results were observed: the sidelobe reduction remains constant even as the frequency changes.
Due to the lack of measurement facilities at frequencies higher than 18 GHz, the antenna arrays are redesigned at lower frequencies of 12 and 18 GHz. In order to achieve wide frequency operation, wide-frequency ESA*, USA 1* and USA 2* feeding networks (where the notation * indicates that the frequency of 12 GHz is chosen as the reference) are designed using the Advanced Design System (ADS). Equal line lengths (ln) with equal power-ratio dividers were constructed. The sidelobe level reduced from -13 dB for ESA* to -19 dB for USA 2*. The measurements of the S-parameters and radiation patterns are performed using a vector network analyzer (VNA) and an anechoic chamber, respectively. The measured results were presented, and a good correlation with the simulations was observed. From these observations, the sidelobe level and grating lobes of USA 2* are suppressed rather well, and USA 2* is recommended for wide-frequency-band 5G mobile base station antennas.
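The sidelobe behaviour discussed above can be explored with a simple array factor computation for a linear array with arbitrary element positions. This is an illustrative sketch of the standard array factor formula AF(θ) = Σ aₙ exp(j k dₙ sin θ), not the thesis's HFSS or ADS models; the uniform-amplitude 16-element half-wavelength example below is an assumption for demonstration:

```python
import numpy as np

def array_factor_db(positions_m, freq_hz, theta_rad, amps=None):
    """Normalised array factor (dB) of a linear array, given element
    positions along the array axis; uniform amplitudes by default."""
    k = 2 * np.pi * freq_hz / 3e8  # free-space wavenumber
    if amps is None:
        amps = np.ones(len(positions_m))
    # Sum element contributions with phase k * d_n * sin(theta)
    af = np.abs(amps @ np.exp(1j * k * np.outer(positions_m, np.sin(theta_rad))))
    return 20 * np.log10(af / af.max())
```

Evaluating this for an equally spaced array at a frequency where the spacing exceeds one wavelength shows the grating lobes that the unequal spacings of USA 1 and USA 2 are designed to suppress.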
- ItemHybrid and dynamic static criteria models for test case prioritization of web application regression testing(Universiti Teknologi Malaysia, 2018) Nejad, Mojtaba RaeisiIn the software testing domain, different techniques and approaches are used to support the process of regression testing in an effective way. The main approaches include test case minimization, test case selection, and test case prioritization. Test case prioritization techniques improve the performance of regression testing by arranging test cases in such a way that maximum fault detection can be achieved in a shorter time. However, the problems for web testing are the time needed to execute test cases and the number of faults detected. The aim of this study is to increase the effectiveness of test case prioritization by proposing an approach that detects faults earlier with a shorter execution time. This research proposed an approach comprising two models: the Hybrid Static Criteria Model (HSCM) and the Dynamic Weighting Static Criteria Model (DWSCM). Each model applied three criteria: the most common HTTP requests in pages, the length of HTTP request chains, and the dependency of HTTP requests. These criteria are used to prioritize test cases for web application regression testing. The proposed HSCM utilized a clustering technique to group test cases. A hybridized technique was proposed to prioritize test cases by relying on test case priorities assigned from the combination of the aforementioned criteria. A dynamic weighting scheme of criteria for prioritizing test cases was used to increase the fault detection rate. The findings revealed that the models enhanced the Average Percentage of Faults Detected (APFD), yielding the highest APFD of 98% for DWSCM and 87% for HSCM, which improved the effectiveness of the prioritization models. The findings confirmed the ability of the proposed techniques to improve web application regression testing.
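The APFD metric reported above has a standard closed form: APFD = 1 - (TF₁ + … + TFₘ)/(n·m) + 1/(2n), where n is the number of test cases, m the number of faults, and TFᵢ the position in the ordering of the first test that reveals fault i. A minimal sketch of that computation, using a hypothetical fault matrix rather than the thesis's data:

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for a test-case ordering.
    order: list of test-case indices in execution order.
    fault_matrix[t][f] is True if test t detects fault f."""
    n = len(order)
    m = len(fault_matrix[0])
    total = 0
    for f in range(m):
        # 1-based position of the first test in the order that finds fault f
        tf = next(i + 1 for i, t in enumerate(order) if fault_matrix[t][f])
        total += tf
    return 1 - total / (n * m) + 1 / (2 * n)
```

A prioritization that moves fault-revealing tests earlier in `order` lowers the TF positions and thus raises the APFD score, which is why the metric is used to compare HSCM and DWSCM.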