1 - 50 of 54
  • 1.
    Abdul Waheed, Malik
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Cheng, Xin
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Kjeldsberg, Per Gunnar
    NTNU.
    Generalized Architecture for a Real-time Computation of an Image Component Features on a FPGA. Manuscript (preprint) (Other academic)
    Abstract [en]

    This paper describes a generalized architecture for real-time component labeling and computation of image component features. Computing image component features in real time is one of the most important paradigms for modern machine vision systems. Embedded machine vision systems demand robust performance, power efficiency and minimum area utilization. The presented architecture can easily be extended with additional modules for parallel computation of arbitrary image component features. Hardware modules for component labeling and feature calculation run in parallel. This modularization makes the architecture suitable for design automation. Our architecture is capable of processing 390 video frames per second of size 640x480 pixels. Dynamic power consumption is 24.20 mW at 86 frames per second on a Xilinx Spartan-6 FPGA.
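    As an illustration of the labeling-plus-feature idea this abstract describes (the paper's design is parallel FPGA hardware; this sequential software sketch and the function name `label_components` are ours):

    ```python
    from collections import deque

    def label_components(img):
        """Label 4-connected foreground components in a binary image
        (list of rows of 0/1) and accumulate per-component features
        (area and bounding box) while each component is traversed."""
        h, w = len(img), len(img[0])
        labels = [[0] * w for _ in range(h)]
        features = {}  # label -> {"area": ..., "bbox": (x0, y0, x1, y1)}
        next_label = 0
        for y in range(h):
            for x in range(w):
                if img[y][x] and not labels[y][x]:
                    next_label += 1
                    labels[y][x] = next_label
                    q = deque([(y, x)])
                    area, y0, y1, x0, x1 = 0, y, y, x, x
                    while q:
                        cy, cx = q.popleft()
                        area += 1
                        y0, y1 = min(y0, cy), max(y1, cy)
                        x0, x1 = min(x0, cx), max(x1, cx)
                        for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                            if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not labels[ny][nx]:
                                labels[ny][nx] = next_label
                                q.append((ny, nx))
                    features[next_label] = {"area": area, "bbox": (x0, y0, x1, y1)}
        return labels, features
    ```

    In the hardware architecture the abstract describes, feature modules run in parallel with the labeling pass instead of this single-threaded loop; the per-component accumulators are the part that generalizes to "arbitrary image component features".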

  • 2.
    Ahmad, Naeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Model, placement optimization and verification of a sky surveillance visual sensor network (2013). In: International Journal of Space-Based and Situated Computing (IJSSC), ISSN 2044-4893, E-ISSN 2044-4907, Vol. 3, no 3, p. 125-135. Article in journal (Refereed)
    Abstract [en]

    A visual sensor network (VSN) is a distributed system of a large number of camera nodes, which generates two-dimensional data. This paper presents a model of a VSN to track large birds, such as the golden eagle, in the sky. The model optimises the placement of camera nodes in the VSN. A camera node is modelled as a function of lens focal length and camera sensor. The VSN provides full coverage between two altitude limits. The model can be used to minimise the number of sensor nodes for any given camera sensor, by exploring the focal lengths that fulfil both the full coverage and the minimum object size requirement. For the case of large bird surveillance, 100% coverage is achieved for relevant altitudes using 20 camera nodes per km² for the investigated camera sensors. A real VSN is designed and measurements of VSN parameters are performed. The results obtained verify the VSN model.

  • 3.
    Ahmad, Naeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Solution space exploration of volumetric surveillance using a general taxonomy (2013). In: Proceedings of SPIE - The International Society for Optical Engineering / [ed] Daniel J. Henry, 2013, Art. no. 871317. Conference paper (Refereed)
    Abstract [en]

    Visual surveillance systems provide real-time monitoring of events or the environment. The availability of low-cost sensors and processors has increased the number of possible applications of these kinds of systems. However, designing an optimized visual surveillance system for a given application is challenging, and often becomes a unique design task for each system. Moreover, choosing components for a given surveillance application out of a wide spectrum of available alternatives is not easy. In this paper, we propose to use a general surveillance taxonomy as a base to structure the analysis and development of surveillance systems. We demonstrate the proposed taxonomy by designing a volumetric surveillance system for monitoring the movement of eagles in wind parks, aiming to avoid their collisions with wind turbines. The analysis of the problem is performed based on the taxonomy, and behavioral and implementation models are identified to formulate the solution space for the problem. Moreover, we show that there is a need for generalized volumetric optimization methods for camera deployment.

  • 4.
    Ahmad, Naeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Cost Optimization of a Sky Surveillance Visual Sensor Network (2012). In: Proceedings of SPIE - The International Society for Optical Engineering, Belgium: SPIE - International Society for Optical Engineering, 2012, Art. no. 84370U. Conference paper (Refereed)
    Abstract [en]

    A Visual Sensor Network (VSN) is a network of spatially distributed cameras. The primary difference between a VSN and other types of sensor networks is the nature and volume of the information. A VSN generally consists of cameras, communication, storage and a central computer, where image data from multiple cameras is processed and fused. In this paper, we use optimization techniques to reduce the cost, as derived by a model, of a VSN to track large birds, such as the golden eagle, in the sky. The core idea is to divide a given monitoring range of altitudes into a number of sub-ranges. Each sub-range is monitored by an individual VSN: VSN1 monitors the lowest sub-range, VSN2 the next higher, and so on, such that a minimum cost is used to monitor a given area. The VSNs may use similar or different types of cameras but different optical components, thus forming a heterogeneous network. We have calculated the cost required to cover a given area both by considering the altitude range as a single element and by dividing it into sub-ranges. Covering a given area and altitude range with a single VSN requires 694 camera nodes, compared to only 96 nodes when the range is divided into sub-ranges, an 86% reduction in cost.
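    The sub-range idea can be illustrated with a toy node-count model (the sensor resolution, pixel pitch, object size and pixel-count figures below are assumptions chosen for illustration, not the paper's parameters, and the model is a simplification of the paper's):

    ```python
    import math

    def nodes_per_km2(h_low, h_high, sensor_px=2048, pixel_um=5.5,
                      obj_m=1.0, min_px=10):
        """Camera nodes needed per square km to cover altitudes
        [h_low, h_high] (metres) with one homogeneous VSN."""
        pitch_m = pixel_um * 1e-6
        # Resolution constraint: an obj_m-sized object must span at
        # least min_px pixels at the highest altitude, fixing focal length.
        f = min_px * pitch_m * h_high / obj_m
        # Coverage constraint: the ground footprint at the lowest
        # altitude fixes node spacing (square sensor assumed).
        footprint = h_low * sensor_px * pitch_m / f
        return math.ceil((1000.0 / footprint) ** 2)

    # One VSN covering 50-1000 m versus three sub-VSNs for sub-ranges:
    single = nodes_per_km2(50, 1000)
    split = sum(nodes_per_km2(lo, hi)
                for lo, hi in [(50, 200), (200, 500), (500, 1000)])
    ```

    Because the footprint shrinks in proportion to h_low/h_high, splitting the altitude range lets each sub-VSN use a focal length matched to its own top altitude, which is the source of the cost reduction the paper reports.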

  • 5.
    Ahmad, Naeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Modeling and Verification of a Heterogeneous Sky Surveillance Visual Sensor Network (2013). In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Art. id. 490489. Article in journal (Refereed)
    Abstract [en]

    A visual sensor network (VSN) is a distributed system of a large number of camera nodes and has useful applications in many areas. The primary difference between a VSN and an ordinary scalar sensor network is the nature and volume of the information. In contrast to scalar sensor networks, a VSN generates two-dimensional data in the form of images. In this paper, we design a heterogeneous VSN to reduce the implementation cost required for the surveillance of a given area between two altitude limits. The VSN is designed by combining three sub-VSNs, which results in a heterogeneous VSN. Measurements are performed to verify full coverage and the minimum achieved object image resolution at the lower and higher altitudes, respectively, for each sub-VSN. Verification of the sub-VSNs also verifies the full coverage of the heterogeneous VSN between the given altitude limits. Results show that the heterogeneous VSN is very effective in decreasing the implementation cost required for the coverage of a given area. A more than 70% decrease in cost is achieved by using a heterogeneous VSN to cover a given area, in comparison to a homogeneous VSN. © 2013 Naeem Ahmad et al.

  • 6.
    Ahmad, Naeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Oelmann, Bengt
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Model and placement optimization of a sky surveillance visual sensor network (2011). In: Proceedings - 2011 International Conference on Broadband and Wireless Computing, Communication and Applications, BWCCA 2011, IEEE Computer Society, 2011, p. 357-362. Conference paper (Refereed)
    Abstract [en]

    Visual Sensor Networks (VSNs) are networks which generate two-dimensional data. The major difference between a VSN and an ordinary sensor network is the large amount of data. In a VSN, a large number of camera nodes form a distributed system which can be deployed in many potential applications. In this paper we present a model of the physical parameters of a visual sensor network to track large birds, such as the golden eagle, in the sky. The developed model is used to optimize the placement of the camera nodes in the VSN. A camera node is modeled as a function of its field of view, which is derived from the combination of the lens focal length and the camera sensor. From the field of view and the resolution of the sensor, a model for full coverage between two altitude limits has been developed. We show that the model can be used to minimize the number of sensor nodes for any given camera sensor, by exploring the focal lengths that both give full coverage and meet the minimum object size requirement. For the case of large bird surveillance we achieve 100% coverage for relevant altitudes using 20 camera nodes per km² for the investigated camera sensors.

  • 7.
    Ahmad, Naeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    A taxonomy of visual surveillance systems (2013). Report (Other academic)
    Abstract [en]

    The increased security risk in society and the availability of low-cost sensors and processors have expedited research in surveillance systems. Visual surveillance systems provide real-time monitoring of the environment. Designing an optimized surveillance system for a given application is a challenging task. Moreover, choosing components for a given surveillance application out of a wide spectrum of available products is not easy.

     

    In this report, we formulate a taxonomy to ease the design and classification of surveillance systems by combining their main features. The taxonomy is based on three main models: behavioral model, implementation model, and actuation model. The behavioral model helps to understand the behavior of a surveillance problem. The model is a set of functions such as detection, positioning, identification, tracking, and content handling. The behavioral model can be used to pinpoint the functions which are necessary for a particular situation. The implementation model structures the decisions which are necessary to implement the surveillance functions, recognized by the behavioral model. It is a set of constructs such as sensor type, node connectivity and node fixture. The actuation model is responsible for taking precautionary measures when a surveillance system detects some abnormal situation.

     

    A number of surveillance systems are investigated and analyzed on the basis of the developed taxonomy. The taxonomy is general enough to handle a vast range of surveillance systems, and it organizes the core features of surveillance systems in one place. It may be considered an important tool when designing surveillance systems: designers can use it to design surveillance systems with reduced effort, cost, and time.

  • 8.
    Alqaysi, Hiba
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Fedorov, Igor
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Evaluating Coverage Effectiveness of Multi-Camera Domes Placement for Volumetric Surveillance (2017). In: ICDSC 2017 Proceedings of the 11th International Conference on Distributed Smart Cameras, New York, NY, USA: Association for Computing Machinery (ACM), 2017, Vol. F132201, p. 49-54. Conference paper (Refereed)
    Abstract [en]

    A multi-camera dome is composed of a number of cameras arranged to monitor a half sphere of the sky. A network of multi-camera domes can be used to monitor flying activities in large open areas, such as birds' activities in wind parks. In this paper, we present a method for evaluating the coverage effectiveness of multi-camera dome placement in such areas. We used GPS trajectories of free-flying birds over an area of 9 km² to analyze the coverage effectiveness of randomly placed domes. The analysis is based on three criteria, namely detection, positioning and the maximum captured resolution. The developed method can be used to evaluate the results of dome placement design and optimization algorithms for volumetric monitoring systems, in order to achieve maximum coverage.
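    A minimal version of such a trajectory-versus-placement check might look like the sketch below. It covers only the detection criterion, approximating each dome's coverage as a hemispherical volume of fixed radius; the simplification and the function name `coverage_fraction` are ours, not the paper's method:

    ```python
    def coverage_fraction(points, domes, radius):
        """Fraction of 3D trajectory points (x, y, altitude) lying inside
        at least one dome's hemispherical coverage volume.
        `domes` are ground positions (x, y); altitude is metres above ground."""
        def covered(x, y, z):
            return z >= 0 and any(
                (x - dx) ** 2 + (y - dy) ** 2 + z * z <= radius ** 2
                for dx, dy in domes)
        hits = sum(covered(*p) for p in points)
        return hits / len(points)
    ```

    Running this over recorded GPS tracks for a candidate dome layout gives the kind of detection-coverage score the abstract describes evaluating.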

  • 9.
    Alqaysi, Hiba
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Fedorov, Igor
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Full Coverage Optimization for Multi Camera Dome Placement in Volumetric Monitoring (2018). In: ACM International Conference Proceeding Series, New York, NY, USA: ACM Digital Library, 2018, Article No. 2. Conference paper (Refereed)
    Abstract [en]

    Volumetric monitoring can be challenging because the target space is 3D and contains moving objects. A multi-camera dome is proposed to provide hemispherical coverage of the 3D space around it. This paper introduces a method that optimizes multi-camera placement for full coverage in a volumetric monitoring system. Camera dome placement in a volume is modeled by adapting the hexagonal packing of circles to provide full coverage at a given height, and 100% detection of flying objects within it. The coverage effectiveness of different placement configurations was assessed using an evaluation environment. The proposed placement is applicable to designing and deploying surveillance systems for remote outdoor areas, such as sky monitoring in wind farms and at airport runways, in order to record and analyze flying activities.
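    The hexagonal-packing placement can be sketched as follows. The grid spacing is the standard hexagonal covering of the plane (centres √3·r apart within a row, rows 1.5·r apart, alternate rows offset by half a pitch); the footprint radius and area are made-up example values, and the sampling-based checker is only a coarse verification, not the paper's evaluation environment:

    ```python
    import math

    def hex_dome_grid(width, height, r):
        """Dome ground positions so that discs of radius r (the dome
        footprint at the target altitude) fully cover a width x height
        rectangle, using hexagonal covering of the plane."""
        dx = r * math.sqrt(3)   # within-row pitch
        dy = 1.5 * r            # row pitch
        pos, row, y = [], 0, 0.0
        while y < height + dy:
            off = dx / 2 if row % 2 else 0.0
            x = off - dx        # start one pitch left to cover the edge
            while x < width + dx:
                pos.append((x, y))
                x += dx
            y += dy
            row += 1
        return pos

    def fully_covered(width, height, r, pos, step=1.0):
        """Sample the rectangle on a grid and check every sample point
        is within r of some dome position."""
        i = 0.0
        while i <= width:
            j = 0.0
            while j <= height:
                if min((i - px) ** 2 + (j - py) ** 2 for px, py in pos) > r * r:
                    return False
                j += step
            i += step
        return True
    ```

    The √3·r / 1.5·r spacing is the densest lattice whose covering radius equals r, which is why hexagonal packing minimizes the dome count for full coverage at the chosen height.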

  • 10.
    Bader, Sebastian
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Krämer, Matthias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Oelmann, Bengt
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Remote image capturing with low-cost and low-power wireless camera nodes (2014). In: Proceedings of IEEE Sensors, IEEE Sensors Council, 2014, p. 730-733, article id 6985103. Conference paper (Refereed)
    Abstract [en]

    Wireless visual sensor networks provide feature-rich information about their surroundings and can thus be used as a universal measurement tool for a great number of applications. Existing solutions, however, have mainly focused on high sample rate applications, such as video surveillance, object detection and tracking. In this paper, we present a wireless camera node architecture that targets low sample rate applications (e.g., manual inspections and meter reading). The major design considerations are a long system lifetime, a small size and a low production cost. We present the overall architecture with its individual design choices, and evaluate the architecture with respect to its application constraints. With a typical image acquisition cost of 1.5 J for medium quality images and a quiescent power demand of only 7 µW, the evaluation results demonstrate that long operation periods, of the order of years, can be achieved in low sample rate scenarios.
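    The multi-year lifetime claim follows from simple energy bookkeeping. The sketch below uses the abstract's 1.5 J-per-image and 7 µW quiescent figures, but the battery capacity (roughly two AA cells) and the once-a-day sample rate are our illustrative assumptions:

    ```python
    def lifetime_days(battery_j, image_j, images_per_day, quiescent_w):
        """Back-of-the-envelope node lifetime: available battery energy
        divided by the daily budget of acquisitions plus quiescent drain."""
        daily_j = image_j * images_per_day + quiescent_w * 86400  # J per day
        return battery_j / daily_j

    # Paper's per-image and quiescent figures, assumed ~10.8 kJ battery,
    # one image per day:
    days = lifetime_days(10800, 1.5, 1, 7e-6)
    ```

    At one image per day the quiescent drain (~0.6 J/day) and the acquisition cost are the same order of magnitude, which is exactly the regime where a 7 µW standby power makes years of operation plausible.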

  • 11.
    Cheng, Xin
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Malik, Waheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Hardware centric machine vision for high precision center of gravity calculation (2010). In: World Academy of Science, Engineering and Technology: An International Journal of Science, Engineering and Technology, ISSN 2010-376X, E-ISSN 2070-3740, Vol. 40, p. 576-583. Article in journal (Refereed)
    Abstract [en]

    We present a hardware-oriented method for real-time measurement of an object's position in video. The targeted application area is light spots used as references for robotic navigation. Different algorithms for dynamic thresholding are explored in combination with component labeling and Center Of Gravity (COG) for the highest possible precision versus Signal-to-Noise Ratio (SNR). The method was developed with a focus on low hardware cost, requiring only one convolution operation for data preprocessing.
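    The COG stage can be sketched as an intensity-weighted first moment over thresholded pixels. A fixed threshold is used here for brevity; the paper's contribution includes dynamic thresholding, which this sketch does not implement, and the function name is ours:

    ```python
    def center_of_gravity(img, threshold):
        """Sub-pixel (x, y) centre of gravity of a light spot: pixels above
        threshold contribute their intensity excess as weight.
        Returns None if no pixel exceeds the threshold."""
        m00 = m10 = m01 = 0.0
        for y, row in enumerate(img):
            for x, v in enumerate(row):
                if v > threshold:
                    w = v - threshold   # weight by intensity above threshold
                    m00 += w
                    m10 += w * x
                    m01 += w * y
        if m00 == 0:
            return None
        return (m10 / m00, m01 / m00)
    ```

    Weighting by intensity rather than counting binary pixels is what gives COG its sub-pixel precision, and it is also why the threshold choice (the paper's dynamic-thresholding question) directly trades precision against SNR.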

  • 12.
    Cheng, Xin
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Malik, Waheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Hardware Centric Machine Vision for High Precision Center of Gravity Calculation (2010). In: PROCEEDINGS OF WORLD ACADEMY OF SCIENCE, ENGINEERING AND TECHNOLOGY, 2010, p. 736-743. Conference paper (Refereed)
    Abstract [en]

    We present a hardware-oriented method for real-time measurement of an object's position in video. The targeted application area is light spots used as references for robotic navigation. Different algorithms for dynamic thresholding are explored in combination with component labeling and Center Of Gravity (COG) for the highest possible precision versus Signal-to-Noise Ratio (SNR). The method was developed with a focus on low hardware cost, requiring only one convolution operation for data preprocessing.

  • 13.
    Dreier, Till
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Krapohl, David
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Maneuski, Dzimitry
    School of Physics & Astronomy, University of Glasgow, Glasgow, Scotland.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Schöwerling, Jan Oliver
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design. Osnabrück University of Applied Sciences, Osnabrück, Germany.
    O'Shea, Val
    School of Physics & Astronomy, University of Glasgow, Glasgow, Scotland.
    Fröjdh, Christer
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    A USB 3.0 readout system for Timepix3 detectors with on-board processing capabilities (2018). In: Journal of Instrumentation, ISSN 1748-0221, E-ISSN 1748-0221, Vol. 13, article id C11017. Article in journal (Refereed)
    Abstract [en]

    Timepix3 is a high-speed hybrid pixel detector consisting of a 256 x 256 pixel matrix with a maximum data rate of up to 5.12 Gbps (80 MHit/s). The ASIC is equipped with eight data channels that are data-driven and zero-suppressed, making it suitable for particle tracking and spectral imaging.

    In this paper, we present a USB 3.0-based programmable readout system with online preprocessing capabilities. USB 3.0 is present on all modern computers and can, under real-world conditions, achieve around 320 MB/s, which allows up to 40 MHit/s of raw pixel data. With on-line processing, the proposed readout system is capable of achieving a higher effective transfer rate (approaching Timepix4), since only relevant information rather than raw data is transmitted. The system is based on an Opal Kelly development board with a Spartan-6 FPGA, providing a USB 3.0 interface between the FPGA and the PC via an FX3 chip. It connects to a CERN Timepix3 chipboard with a standard VHDCI connector via a custom-designed mezzanine card. The firmware is structured into blocks such as the detector interface, the USB interface, system control, and an interface for data pre-processing. On the PC side, a Qt/C++ multi-platform software library is implemented to control the readout system, providing access to detector functions and handling high-speed USB 3.0 streaming of data from the detector.

    We demonstrate equalisation, calibration and data acquisition using a Cadmium Telluride sensor, and optimise imaging data using simultaneous ToT (Time-over-Threshold) and ToA (Time-of-Arrival) information. The presented readout system is capable of other on-line processing, such as analysis and classification of nuclear particles, with current or larger FPGAs.

  • 14.
    Fedorov, Igor
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Alqaysi, Hiba
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Placement Strategy of Multi-Camera Volumetric Surveillance System for Activities Monitoring (2017). In: ICDSC 2017 Proceedings of the 11th International Conference on Distributed Smart Cameras, New York, NY, USA: Association for Computing Machinery (ACM), 2017, Vol. F132201, p. 113-118. Conference paper (Refereed)
    Abstract [en]

    The design of a multi-camera surveillance system comes with many advantages; for example, it facilitates understanding of how flying objects act in a given volume. One possible application is observing the interaction of birds and calculating their trajectories around wind turbines, creating promising systems for preventing bird collisions with turbine blades. However, there are also challenges, such as finding the optimal node placement and camera calibration. To address these challenges we investigated a trade-off between calibration accuracy and node requirements, including resolution, modulation transfer function, field of view and baseline angle. We developed a camera placement strategy to achieve improved coverage for golden eagle monitoring and tracking. This strategy is based on a modified resolution criterion that takes into account the contrast function of the camera and the estimation of the base angle between the cameras.

  • 15.
    Fedorov, Igor
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Alqaysi, Hiba
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Towards calibration of outdoor multi-camera visual monitoring system (2018). In: ACM International Conference Proceeding Series, New York, NY, US: ACM Digital Library, 2018, p. 6. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a method for calibrating multi-camera systems where no natural reference points exist in the surrounding environment. Monitoring the air space at wind farms is our test case. The goal is to monitor the trajectories of flying birds to prevent them from colliding with rotor blades. Our camera calibration method is based on the observation of a portable artificial reference marker made out of a pulsed light source and a navigation satellite sensor module. The reference marker can determine and communicate its position in the world coordinate system at centimeter precision using navigation sensors. Our results showed that simultaneous detection of the same marker in several cameras having overlapping fields of view allowed us to determine the marker's position in 3D world coordinate space with an accuracy of 3-4 cm. These experiments were made in the volume around a wind turbine, at distances from cameras to marker within a range of 70 to 90 m.
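    The 3D position estimate from two cameras seeing the same marker can be sketched as the closest-approach midpoint of two viewing rays. This is generic multi-view geometry, not the paper's specific calibration pipeline (which also handles lens models and world-coordinate registration via the marker's GNSS position), and the function name is ours:

    ```python
    def ray_midpoint(o1, d1, o2, d2):
        """Closest-approach midpoint of two 3D rays p = o + t*d.
        o1, o2 are camera centres; d1, d2 are (unnormalized) viewing
        directions toward the observed marker."""
        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))
        w0 = tuple(p - q for p, q in zip(o1, o2))
        a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
        d, e = dot(d1, w0), dot(d2, w0)
        denom = a * c - b * b            # zero only for parallel rays
        s = (b * e - c * d) / denom      # parameter on ray 1
        t = (a * e - b * d) / denom      # parameter on ray 2
        p1 = tuple(o + s * v for o, v in zip(o1, d1))
        p2 = tuple(o + t * v for o, v in zip(o2, d2))
        return tuple((x + y) / 2 for x, y in zip(p1, p2))
    ```

    With calibrated cameras, the residual distance between the two closest points gives a direct check of the 3-4 cm accuracy figure the abstract reports.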

  • 16.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Malik, Abdul Waheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O’Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Implementation of Wireless Vision Sensor Node With a Lightweight Bi-Level Video Coding (2013). In: IEEE Journal on Emerging and Selected Topics in Circuits and Systems, ISSN 2156-3357, Vol. 3, no 2, p. 198-209, article id 6508941. Article in journal (Refereed)
    Abstract [en]

    Wireless vision sensor networks (WVSNs) consist of a number of wireless vision sensor nodes (VSNs) which have limited resources, i.e., energy, memory, processing, and wireless bandwidth. The processing and communication energy requirements of individual VSNs have been a challenge because of limited energy availability. To meet this challenge, we have proposed and implemented a programmable and energy efficient VSN architecture which has lower energy requirements and a reduced design complexity. In the proposed system, vision tasks are partitioned between the hardware implemented VSN and a server. The initial data dominated tasks are implemented on the VSN while the control dominated complex tasks are processed on a server. This strategy reduces both the processing energy consumption and the design complexity. The communication energy consumption is reduced by implementing a lightweight bi-level video coding on the VSN. The energy consumption is measured on real hardware for different applications, and the proposed VSN is compared against published systems. The results show that, depending on the application, the energy consumption can be reduced by a factor of approximately 1.5 up to 376 as compared to a VSN without the bi-level video coding. The proposed VSN offers an energy efficient, generic architecture with smaller design complexity on a hardware reconfigurable platform, and offers easy adaptation for a number of applications as compared to published systems.
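    A simple run-length scheme illustrates why coding bi-level (binary) frames can cut transmission energy so sharply: sparse foreground masks collapse into a few runs. The abstract does not specify the paper's actual codec, so this is only a generic example with our own function names:

    ```python
    def rle_encode(bits):
        """Run-length encode one bi-level scan line (iterable of 0/1)
        into (value, run_length) pairs."""
        runs = []
        for b in bits:
            if runs and runs[-1][0] == b:
                runs[-1][1] += 1          # extend the current run
            else:
                runs.append([b, 1])       # start a new run
        return [(v, n) for v, n in runs]

    def rle_decode(runs):
        """Inverse of rle_encode: expand (value, run_length) pairs."""
        out = []
        for v, n in runs:
            out.extend([v] * n)
        return out
    ```

    Sending a handful of (value, length) pairs instead of every pixel is the mechanism behind the large, application-dependent energy reductions the abstract reports for bi-level coding.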

  • 17.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O’Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Low Complexity Background Subtraction for Wireless Vision Sensor Node2013In: Proceedings - 16th Euromicro Conference on Digital System Design, DSD 2013, 2013, p. 681-688Conference paper (Refereed)
    Abstract [en]

    Wireless vision sensor nodes have limited resources such as energy, memory, wireless bandwidth and processing, so it becomes necessary to investigate lightweight vision tasks. To highlight foreground objects, many machine vision applications depend on the background subtraction technique. Traditional background subtraction approaches employ recursive and non-recursive techniques and store the whole image in memory. This raises issues such as complexity on the hardware platform, energy requirements and latency. This work presents a low complexity background subtraction technique for a hardware implemented VSN. The proposed technique utilizes existing image scaling techniques to scale down the image. The downscaled image is stored in the memory of the microcontroller, which is already available for transmission. For the subtraction operation, the background pixels are generated in real time through upscaling. The performance and memory requirements of the system are compared for four image scaling techniques: nearest neighbor, averaging, bilinear, and bicubic. The results show that a system with lightweight scaling techniques, i.e., nearest neighbor and averaging, up to a scaling factor of 8, missed on average less than one object as compared to a system which uses the full original background image. The proposed approach reduces the cost, design/implementation complexity and the memory requirement by a factor of up to 64.
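    The downscale-store-upscale scheme this abstract describes can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the scaling factor, threshold value, and frame sizes below are assumptions, and block averaging / nearest-neighbour stand in for the four scaling methods the paper compares.

```python
import numpy as np

def downscale_avg(img, f):
    """Downscale by factor f using block averaging (one of the four scaling methods)."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upscale_nn(small, f):
    """Upscale by nearest neighbour: each stored pixel regenerates an f x f block."""
    return np.repeat(np.repeat(small, f, axis=0), f, axis=1)

def subtract_background(frame, stored_bg_small, f, thresh=20):
    """Binary foreground mask from a frame and the downscaled stored background."""
    bg = upscale_nn(stored_bg_small, f)
    diff = np.abs(frame.astype(np.int16) - bg.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

# With a factor-8 downscale, the stored background is 64x smaller than the frame.
frame = np.zeros((64, 64), dtype=np.uint8)
frame[16:32, 16:32] = 200                       # a bright foreground object
bg_small = downscale_avg(np.zeros((64, 64), dtype=np.uint8), 8)
mask = subtract_background(frame, bg_small, 8)  # 1 inside the object, 0 elsewhere
```

The memory saving quoted in the abstract (a factor of up to 64) follows directly: a factor-8 downscale stores 8 x 8 = 64 times fewer background pixels.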

  • 18.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Benkrid, Khaled
    School of Engineering at the University of Edinburgh,UK.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O’Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Analysis and Characterization of Embedded Vision Systems for Taxonomy Formulation2013In: Proceedings of SPIE - The International Society for Optical Engineering / [ed] Nasser Kehtarnavaz, Matthias F. Carlsohn, USA: SPIE - International Society for Optical Engineering, 2013, p. Art. no. 86560J-Conference paper (Refereed)
    Abstract [en]

    The current trend in embedded vision systems is to propose bespoke solutions for specific problems, as each application has different requirements and constraints. There is no widely used model or benchmark which aims to facilitate generic solutions in embedded vision systems. Providing such a model is a challenging task due to the wide range of use cases, environmental factors, and available technologies. However, common characteristics can be identified to propose an abstract model. Indeed, the majority of vision applications focus on the detection, analysis and recognition of objects. These tasks can be reduced to vision functions which can be used to characterize the vision systems. In this paper, we present the results of a thorough analysis of a large number of different types of vision systems. This analysis led us to the development of a system taxonomy, in which a number of vision functions, as well as their combination, characterize embedded vision systems. To illustrate the use of this taxonomy, we have tested it against a real vision system that detects magnetic particles in a flowing liquid to predict and avoid critical machinery failure. The proposed taxonomy is evaluated using a quantitative parameter which shows that it covers 95 percent of the investigated vision systems and that its flow is ordered for 60 percent of the systems. This taxonomy will serve as a tool for the classification and comparison of systems and will enable researchers to propose generic and efficient solutions for the same class of systems.

  • 19.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Malik, Abdul Waheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Complexity Analysis of Vision Functions for implementation of Wireless Smart Cameras using System Taxonomy2012In: Proceedings of SPIE - The International Society for Optical Engineering, Belgium: SPIE - International Society for Optical Engineering, 2012, p. Art. no. 84370C-Conference paper (Refereed)
    Abstract [en]

    There are a number of challenges caused by the large amount of data and limited resources such as memory, processing capability, energy consumption and bandwidth when implementing vision systems on wireless smart cameras using embedded platforms. It is usual for research in this field to focus on the development of a specific solution for a particular problem. There is a requirement for a tool which can predict the resource requirements for the development and comparison of vision solutions in wireless smart cameras. To accelerate the development of such a tool, we have used a system taxonomy, which shows that the majority of wireless smart cameras have common functions. In this paper, we have investigated the arithmetic complexity and memory requirements of vision functions by using the system taxonomy, and we have proposed an abstract complexity model. To demonstrate the use of this model, we have analysed a number of implemented systems and showed that the complexity model, together with the system taxonomy, can be used for the comparison and generalization of vision solutions. Moreover, it will assist researchers/designers in predicting the resource requirements for different classes of vision systems in reduced time and with little effort.

  • 20.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Waheed, Malik A.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Architecture of Wireless Visual Sensor Node with Region of Interest Coding2012In: Proceedings - 2012 IEEE 3rd International Conference on Networked Embedded Systems for Every Application, NESEA 2012, IEEE conference proceedings, 2012, p. Art. no. 6474029-Conference paper (Refereed)
    Abstract [en]

    The challenges involved in designing a wireless Vision Sensor Node include the reduction in processing and communication energy consumption, in order to maximize its lifetime. This work presents an architecture for a wireless Vision Sensor Node which consumes low processing and communication energy. The processing energy consumption is reduced by processing lightweight vision tasks on the VSN and by partitioning the vision tasks between the wireless Vision Sensor Node and the server. The communication energy consumption is reduced with Region Of Interest coding together with a suitable bi-level compression scheme. A number of different processing strategies are investigated to realize a wireless Vision Sensor Node with a low energy consumption. The investigation shows that the wireless Vision Sensor Node, using Region Of Interest coding and the CCITT Group4 compression technique, consumes 43 percent lower processing and communication energy as compared to the wireless Vision Sensor Node implemented without Region Of Interest coding. The proposed wireless Vision Sensor Node can achieve a lifetime of 5.4 years, with a sample period of 5 minutes, by using 4 AA batteries.
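    The Region Of Interest coding step this abstract relies on can be illustrated with a minimal sketch: find the bounding box of the foreground pixels in a binary mask and pass only that sub-image to the bi-level compressor. This is an assumption-laden illustration, not the paper's architecture; the frame size and object position are invented for the example.

```python
import numpy as np

def roi_bounding_box(mask):
    """Bounding box (r0, r1, c0, c1) of foreground pixels in a binary mask.

    Returns None when the mask is empty, i.e. nothing needs transmitting.
    """
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0:
        return None
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1

mask = np.zeros((240, 320), dtype=np.uint8)
mask[100:120, 50:90] = 1                  # one detected object
box = roi_bounding_box(mask)
r0, r1, c0, c1 = box
roi = mask[r0:r1, c0:c1]                  # only this region is compressed and sent
```

The energy argument follows the same logic as the paper's: the bi-level compressor then operates on a 20 x 40 region instead of the full 240 x 320 frame, shrinking both processing and transmission payload.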

  • 21.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Waheed, Malik A.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Complexity Analysis of Vision Functions for Comparison of Wireless Smart Cameras2014In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, p. Art. no. 710685-Article in journal (Refereed)
    Abstract [en]

    There are a number of challenges caused by the large amount of data and limited resources such as memory, processing capability, energy consumption, and bandwidth, when implementing vision systems on wireless smart cameras using embedded platforms. It is usual for research in this field to focus on the development of a specific solution for a particular problem. There is a requirement for a tool which facilitates the complexity estimation and comparison of wireless smart camera systems in order to develop efficient generic solutions. To develop such a tool, we have presented, in this paper, a complexity model by using a system taxonomy. In this model, we have investigated the arithmetic complexity and memory requirements of vision functions with the help of system taxonomy. To demonstrate the use of the proposed model, a number of actual systems are analyzed in a case study. The complexity model, together with system taxonomy, is used for the complexity estimation of vision functions and for a comparison of vision systems. After comparison, the systems are evaluated for implementation on a single generic architecture. The proposed approach will assist researchers in benchmarking and will assist in proposing efficient generic solutions for the same class of problems with reduced design and development costs.

  • 22.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Implementation of wireless Vision Sensor Node for Characterization of Particles in Fluids2012In: IEEE transactions on circuits and systems for video technology (Print), ISSN 1051-8215, E-ISSN 1558-2205, Vol. 22, no 11, p. 1634-1643Article in journal (Refereed)
    Abstract [en]

    Wireless Vision Sensor Networks (WVSNs) have a number of wireless Vision Sensor Nodes (VSNs), often spread over a large geographical area. Each node has an image capturing unit, a battery or alternative energy source, a memory unit, a light source, a wireless link and a processing unit. The challenges associated with WVSNs include low energy consumption, low bandwidth, and limited memory and processing capabilities. In order to meet these challenges, our research is focused on the exploration of energy efficient reconfigurable architectures for the VSN. In this work, the design/research challenges associated with the implementation of the VSN on different computational platforms, such as micro-controller, FPGA and server, are explored. In relation to this, the effects on the energy consumption and the design complexity at the node, when the functionality is moved from one platform to another, are analyzed. Based on the implementation of the VSN on embedded platforms, the lifetime of the VSN is predicted using the measured energy values of the platforms for different implementation strategies. The implementation results show that an architecture in which the compressed images after pixel based operations are transmitted realizes a WVSN system with low energy consumption. Moreover, the complex post processing tasks are moved to a server with reduced constraints.

  • 23.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Malik, Abdul Waheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Architecture Exploration Based on Tasks Partitioning Between Hardware, Software and Locality for a Wireless Vision Sensor Node2012In: International Journal of Distributed Systems and Technologies, ISSN 1947-3532, E-ISSN 1947-3540, Vol. 3, no 2, p. 58-71Article in journal (Refereed)
    Abstract [en]

    Wireless Vision Sensor Networks (WVSNs) are an emerging field consisting of a number of Visual Sensor Nodes (VSNs). Compared to traditional sensor networks, WVSNs operate on two dimensional data, which requires high bandwidth and leads to high energy consumption. In order to minimize the energy consumption, the focus is on finding energy efficient and programmable architectures for the VSN by partitioning the vision tasks among hardware (FPGA), software (micro-controller) and locality (sensor node or server). The energy consumption, cost and design time of different processing strategies are analyzed for the implementation of the VSN. Moreover, the processing energy and communication energy consumption of the VSN are investigated in order to maximize the lifetime. Results show that introducing a reconfigurable platform such as an FPGA with small static power consumption, and transmitting compressed images after the pixel based tasks from the VSN, results in a longer battery lifetime for the VSN.

  • 24.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Khursheed, Khursheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O’ Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Exploration of Target Architecture for aWireless Camera Based Sensor Node2010In: 28th Norchip Conference, NORCHIP 2010, IEEE conference proceedings, 2010, p. 1-4Conference paper (Refereed)
    Abstract [en]

    The challenges associated with wireless vision sensor networks are low energy consumption, limited bandwidth and limited processing capabilities, and different approaches have been proposed to meet them. Research in wireless vision sensor networks has been focused on two different assumptions: the first is sending all data to the central base station without local processing; the second is based on conducting all processing locally at the sensor node and transmitting only the final results. Our research is focused on partitioning the vision processing tasks between the sensor node and the central base station. In this paper we have added the exploration dimension of performing some of the vision tasks, such as image capturing, background subtraction, segmentation and TIFF Group4 compression, on the FPGA, while communication is handled on the microcontroller. The remaining vision processing tasks, i.e. morphology, labeling, bubble remover and classification, are processed on the central base station. Our results show that the introduction of an FPGA for some of the visual tasks will result in a longer lifetime for the visual sensor node while the architecture is still programmable.

  • 25.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O’Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Demo: SRAM FPGA based Wireless Smart Camera: SENTIOF-CAM2014In: Proceedings of the International Conference on Distributed Smart Cameras, 2014, article id a41Conference paper (Refereed)
    Abstract [en]

    Wireless sensor network applications with large data requirements are attracting the utilization of high performance embedded platforms, i.e. Field Programmable Gate Arrays (FPGAs), for in-node sensor processing. However, the design complexity and the high configuration and static energies of SRAM FPGAs impose challenges for duty cycled applications. In this demo, we demonstrate the functionality of an SRAM FPGA based wireless vision sensor node called SENTIOF-CAM. The demonstration shows that, by using intelligent techniques, a low energy and low complexity SRAM FPGA based wireless vision sensor node can be realized for duty cycled applications.

  • 26.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O’Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Energy Driven Selection and Hardware Implementation of Bi-Level Image Compression2014In: Proceedings of the International Conference on Distributed Smart Cameras, ACM Press, 2014, article id a32Conference paper (Refereed)
    Abstract [en]

    Wireless Vision Sensor Nodes have limited resources and are expected to achieve a long lifetime from the available limited energy. A wireless Vision Sensor Node (VSN) typically consumes more energy in communication than in processing. The communication energy can be reduced by reducing the amount of transmission data with the help of a suitable compression scheme. This work investigates bi-level compression schemes including G4, G3, JBIG2, Rectangular, GZIP, GZIP_Pack and JPEG-LS on a hardware platform. The investigation results show that the GZIP_Pack, G4 and JBIG2 schemes are suitable for a hardware implemented VSN. JBIG2 offers up to a 43 percent reduction in overall energy consumption as compared to G4 and GZIP_Pack for complex images. However, JBIG2 has higher resource requirements and implementation complexity. The difference in overall energy consumption is smaller for smooth images. Depending on the application requirements, the exclusion of a header can reduce the energy consumption by approximately 1 to 33 percent.
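    The link between image content and communication energy that this abstract reports (complex images compress worse than smooth ones) can be reproduced with a small sketch. Note the assumptions: the G4 and JBIG2 codecs studied in the paper are not in the Python standard library, so `zlib` (DEFLATE) stands in for the GZIP-family schemes, and the image sizes and content are invented for the example.

```python
import zlib
import numpy as np

def pack_bilevel(img):
    """Pack a binary image to 1 bit per pixel before compression (cf. GZIP_Pack)."""
    return np.packbits(img.astype(np.uint8)).tobytes()

def compressed_bytes(img):
    """DEFLATE-compressed size of the packed bi-level image; zlib stands in for GZIP."""
    return len(zlib.compress(pack_bilevel(img), level=9))

# A smooth image: one large blob, long runs of identical bits, highly compressible.
smooth = np.zeros((128, 128), dtype=np.uint8)
smooth[32:96, 32:96] = 1

# A complex image: random noise, essentially incompressible at 1 bit per pixel.
rng = np.random.default_rng(0)
complex_img = (rng.random((128, 128)) > 0.5).astype(np.uint8)

raw = 128 * 128 // 8    # 2048 bytes uncompressed at 1 bit per pixel
```

Since transmission energy scales with the number of bytes sent, the smooth image costs a small fraction of the raw payload while the noisy image gains nothing from compression, mirroring the paper's smooth-versus-complex observation.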

  • 27.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Shahzad, Khurram
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Ahmad, Naeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Oelmann, Bengt
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Energy Efficient SRAM FPGA based Wireless Vision Sensor Node: SENTIOF‐CAM2014In: IEEE transactions on circuits and systems for video technology (Print), ISSN 1051-8215, E-ISSN 1558-2205, Vol. 24, no 12, p. 2132-2143Article in journal (Refereed)
    Abstract [en]

    Many Wireless Vision Sensor Network (WVSN) applications are characterized by low duty cycling. An individual wireless Vision Sensor Node (VSN) in a WVSN is required to operate with limited resources, i.e., processing, memory and wireless bandwidth, on the available limited energy. For such a resource constrained VSN, this paper presents a low complexity, energy efficient and programmable VSN architecture based on a design matrix which includes partitioning of the processing load between the node and a server, a low complexity background subtraction, bi-level video coding and duty cycling. The task partitioning and the proposed background subtraction reduce the processing energy and design complexity for the hardware implemented VSN. The bi-level video coding reduces the communication energy, whereas the duty cycling conserves energy for lifetime maximization. The proposed VSN, referred to as SENTIOF-CAM, has been implemented on a customized single board which includes an SRAM FPGA, a microcontroller, a radio transceiver and a FLASH memory. The energy values are measured for different states and the results are compared with existing solutions. The comparison shows that the proposed solution can offer up to a 69 times energy reduction. The lifetime based on the measured energy values shows that, for a sample period of 5 minutes, a 3.2 year lifetime can be achieved with a battery of 37.44 kJ energy. In addition, the proposed solution offers a generic architecture with smaller design complexity on a hardware reconfigurable platform and offers easy adaptation for a number of applications.

  • 28.
    Imran, Muhammad
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Wang, Xu
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Pre-processing Architecture for IR-Visual Smart Camera Based on Post-Processing Constraints2016Conference paper (Refereed)
    Abstract [en]

    In embedded vision systems, the efficiency of pre-processing architectures has a ripple effect on post-processing functions such as feature extraction, classification and recognition. In this work, we investigated a pre-processing architecture for a smart camera system integrating thermal and vision sensors, by considering the constraints of post-processing. By utilizing the locality feature of the system, we performed pre-processing on the camera node using an FPGA, and post-processing on the client device using the microprocessor platform NVIDIA Tegra. The study shows that, for outdoor people surveillance applications with complex backgrounds and varying lighting conditions, the pre-processing architecture which transmits thermal binary Region-of-Interest (ROI) images offers better classification accuracy and smaller complexity as compared to alternative approaches.

  • 29.
    Khursheed, Khursheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Malik, Abdul Waheed
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Exploration of tasks partitioning between hardware software and locality for a wireless camera based vision sensor node2011In: Proceedings - 6th International Symposium on Parallel Computing in Electrical Engineering, PARELEC 2011, IEEE conference proceedings, 2011, p. 127-132Conference paper (Refereed)
    Abstract [en]

    In this paper we have explored different possibilities for partitioning the tasks between hardware, software and locality for the implementation of the vision sensor node used in a wireless vision sensor network. Wireless vision sensor networks are an emerging field which combines image sensors, on-board computation and communication links. Compared to traditional wireless sensor networks, which operate on one dimensional data, wireless vision sensor networks operate on two dimensional data, which requires higher processing power and communication bandwidth. The research focus within the field of wireless vision sensor networks has been on two different assumptions, involving either sending raw data to the central base station without local processing or conducting all processing locally at the sensor node and transmitting only the final results. Our research work focuses on determining an optimal point of hardware/software partitioning, as well as partitioning between local and central processing, based on minimum energy consumption for the vision processing operation. The lifetime of the vision sensor node is predicted by evaluating the energy requirement of the embedded platform, with a combination of FPGA and microcontroller, for the implementation of the vision sensor node. Our results show that sending compressed images after pixel based tasks will result in a longer battery lifetime with a reasonable hardware cost for the vision sensor node. © 2011 IEEE.

  • 30.
    Khursheed, Khursheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Exploration of Local and Central Processing for a Wireless Camera Based Sensor Node2010In: International Conference on Signals and Electronic Systems, ICSES'10 - Conference Proceeding 2010, Article number 5595231, IEEE conference proceedings, 2010, p. 147-150Conference paper (Refereed)
    Abstract [en]

    Wireless vision sensor networks are an emerging field which combines image sensors, on-board computation and communication links. Compared to traditional wireless sensor networks, which operate on one dimensional data, wireless vision sensor networks operate on two dimensional data, which requires both higher processing power and communication bandwidth. The research focus within the field of wireless vision sensor networks has been based on two different assumptions, involving either sending data to the central base station without local processing or conducting all processing locally at the sensor node and transmitting only the final results. In this paper we focus on determining an optimal point for intelligence partitioning between the sensor node and the central base station, and on exploring compression methods. The lifetime of the visual sensor node is predicted by evaluating the energy consumption for different levels of intelligence partitioning at the sensor node. Our results show that sending compressed images after segmentation will result in a longer lifetime for the sensor node.

  • 31.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Comparison of FPGA and DSP performances in neighbourhood oriented real-time video processingManuscript (Other academic)
  • 32.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Global Block RAM Allocation Algorithm for FPGA implementation of Real-Time Video Processing Systems2004Report (Other scientific)
    Abstract [en]

    In this master thesis an algorithm for the allocation of on-chip FPGA Block RAMs for the implementation of real-time video processing systems is presented. The effectiveness of the algorithm is shown through the implementation of realistic image processing systems. The algorithm, which is based on a heuristic, seeks the most cost effective way of allocating memory objects to the FPGA Block RAMs. The experimental results obtained show that this algorithm generates results which are close to the theoretical optimum for most design cases.
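    The general flavour of heuristic Block RAM allocation can be shown with a simple greedy first-fit sketch. This is an illustrative stand-in only, not the thesis's cost-driven algorithm; the 18 kbit Block RAM capacity and the VGA line-buffer sizes are assumptions chosen for the example.

```python
def first_fit_allocate(buffers, bram_bits=18432):
    """Greedy first-fit: place each memory object in the first Block RAM with room.

    buffers: list of object sizes in bits. Returns (placement, brams_used),
    where placement maps each object index to a Block RAM index.
    """
    brams = []                                  # remaining free bits per Block RAM
    placement = []
    for i, bits in enumerate(buffers):
        for j, free in enumerate(brams):
            if bits <= free:
                brams[j] -= bits                # object fits in an existing BRAM
                placement.append((i, j))
                break
        else:
            brams.append(bram_bits - bits)      # open a new Block RAM
            placement.append((i, len(brams) - 1))
    return placement, len(brams)

# Three VGA line buffers of 640 pixels x 8 bits = 5120 bits each
# all fit into a single 18 kbit Block RAM under first-fit.
placement, used = first_fit_allocate([5120, 5120, 5120])
```

A cost-aware heuristic like the one the thesis describes would additionally weigh routing and port constraints when choosing a target BRAM, rather than just taking the first one with free space.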

  • 33.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Memory Synthesis for FPGA Implementation of Real-Time Video Processing Systems2009Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis, both a method and a tool to enable efficient memory synthesis for real-time video processing systems on field programmable gate arrays are presented. In a real-time video processing system (RTVPS), a set of operations is repetitively performed on every image frame in a video stream. These operations are usually computationally intensive and, depending on the video resolution, can also be very data transfer dominated. These operations, which often require data from several consecutive frames and many rows of data within each frame, must be performed accurately and under real-time constraints, as the results greatly affect the accuracy of the application. Application domains of these systems include machine vision, object recognition and tracking, visual enhancement and surveillance.

    Developments in field programmable gate arrays (FPGAs) have been the motivation for choosing them as the platform for implementing an RTVPS. Essential logic resources required for RTVPS operations are currently available, optimized and embedded in modern FPGAs. One such resource is the embedded memory used for data buffering during real-time video processing. Each data buffer corresponds to a row of pixels in a video frame and is allocated using a synthesis tool that performs the mapping of buffers to embedded memories. This approach has been investigated and proven to be inefficient. An efficient alternative employing resource sharing and allocation with pipelining is discussed in this thesis.

    A method for the optimised use of these embedded memories and, additionally, a tool supporting automatic generation of hardware description language (HDL) modules for the synthesis of the memories according to the developed method are the main focus of this thesis. The method covers the memory architecture, allocation and addressing. Its central objective is the optimised use of embedded memories in the process of buffering data on-chip for an RTVPS operation. The developed software tool is an environment for generating HDL code implementing the memory sub-components.

    The tool integrates with the Interface and Memory Modelling (IMEM) tools in such a way that IMEM’s output - the memory requirements of an RTVPS - is imported and processed in order to generate the HDL code. IMEM is based on the philosophy that the memory requirements of an RTVPS can be modelled and synthesized separately from the development of the core RTVPS algorithm, thus freeing the designer to focus on the development of the algorithm while relying on IMEM for the implementation of the memory sub-components.

  • 34.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Memory Synthesis for FPGA Implementation of Real-Time Video Processing Systems2006Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis, both a method and a tool to enable efficient memory synthesis for real-time video processing systems on field programmable gate arrays are presented. In a real-time video processing system (RTVPS), a set of operations is repetitively performed on every image frame in a video stream. These operations are usually computationally intensive and, depending on the video resolution, can also be very data-transfer dominated. These operations, which often require data from several consecutive frames and many rows of data within each frame, must be performed accurately and under real-time constraints, as the results greatly affect the accuracy of the application. Application domains of these systems include object recognition, object tracking and surveillance.

    Developments in field programmable gate arrays (FPGAs) have been the motivation for choosing them as the platform for implementing an RTVPS. Essential logic resources required for RTVPS operations are currently available, optimized and embedded in modern FPGAs. One such resource is the embedded memory used for data buffering during real-time video processing. Each data buffer corresponds to a row of pixels in a video frame and is allocated using a synthesis tool that performs the mapping of buffers to embedded memories. This approach has been investigated and proven to be inefficient. An efficient alternative employing resource sharing and allocation with pipelining is discussed in this thesis.

    A method for the optimal use of these embedded memories and, additionally, a tool supporting automatic generation of hardware description language (HDL) code for the synthesis of the memories according to the developed method are the main focus of this thesis. The method covers the memory architecture, allocation and addressing. Its central objective is the optimal use of embedded memories in the process of buffering data on-chip for an RTVPS operation. The developed software tool is an environment for generating HDL code implementing the memory sub-components.

    The tool integrates with the Interface and Memory Modelling (IMEM) tools in such a way that IMEM’s output - the memory requirements of an RTVPS - is imported and processed in order to generate the HDL code. IMEM is based on the philosophy that the memory requirements of an RTVPS can be modelled and synthesized separately from the development of the core RTVPS algorithm, thus freeing the designer to focus on the development of the algorithm while relying on IMEM for the implementation of the memory sub-components.

  • 35.
    Lawal, Najeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lateef, Fahad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Usman, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Power Consumption Measurement & Configuration Time of FPGA2015In: 2015 POWER GENERATION SYSTEMS AND RENEWABLE ENERGY TECHNOLOGIES (PGSRET-2015), 2015, p. 63-67, article id 7312250Conference paper (Refereed)
    Abstract [en]

    In this paper, we present results concerning power consumption and configuration time for FPGAs. The re-programmability, flexibility and re-configurability of FPGAs open up possibilities such as adding more features and extending the lifetime of embedded systems. The power consumption of peripheral devices is also significantly affected by their timing behaviour, so estimates based on average activity may not be useful for accurate power estimation of a system. The configuration time of an FPGA depends on the configuration data width, file size, clock frequency and flash access time. We measured the total power consumption on each voltage supply and the total configuration time of a Spartan-6 FPGA Atlys board using LabVIEW, and compared the estimated power values with the measured values. We believe that these experimental results will be useful to other FPGA-based embedded systems.

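    The configuration-time dependence named in the abstract (data width, file size, clock frequency) can be sketched as a first-order estimate that ignores flash access overhead. The bitstream size, bus width and clock below are illustrative assumptions, not the paper's measured values.

    ```python
    # Rough first-order estimate of FPGA configuration time:
    #   time = bitstream_bits / (bus_width * clock_frequency)
    # Flash access time adds on top of this; it is ignored here.
    # All numbers are illustrative assumptions.

    def config_time_s(bitstream_bytes, bus_width_bits, clock_hz):
        """Seconds to shift the whole bitstream in at the given width/clock."""
        total_bits = bitstream_bytes * 8
        return total_bits / (bus_width_bits * clock_hz)

    # e.g. a ~1.5 MB bitstream over an 8-bit interface at 20 MHz
    t = config_time_s(1_500_000, 8, 20_000_000)   # 0.075 s
    ```

    Doubling the bus width or the configuration clock halves the estimate, which is why those two parameters dominate the trade-off the paper measures.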
  • 36.
    Lawal, Najeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Embedded FPGA memory requirements for real-time video processing applications2005In: 23rd NORCHIP Conference 2005, IEEE conference proceedings, 2005, p. 206-209, article id 1597025Conference paper (Refereed)
    Abstract [en]

    FPGAs show interesting properties for the real-time implementation of video processing systems. An important feature is the on-chip RAM blocks embedded on the FPGAs. This paper presents an analysis of the current and future requirements that video processing systems put on these embedded memory resources. The analysis is performed by allocating a set of video processing systems onto different existing and extrapolated FPGA architectures. The analysis shows that FPGAs should support multiple memory sizes to take full advantage of the architecture. These results are valuable both for designers of systems and for planning the development of new FPGA architectures.

  • 37.
    Lawal, Najeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Design exploration of a multi-camera dome for sky monitoring2016In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2016, Vol. 12-15-September-2016, p. 14-18, article id 2967419Conference paper (Refereed)
    Abstract [en]

    Sky monitoring has many applications but also many challenges to be addressed before it can be realized. Some of the challenges are cost, energy consumption and complex deployment. One way to address these challenges is to compose a camera dome by grouping cameras that together monitor a half sphere of the sky. In this paper, we present a model for design exploration that investigates how the characteristics of camera chips and objective lenses affect the overall cost of a node of a camera dome. The investigation showed that accepting more cameras in a single node can result in a reduced total cost of the system. We conclude that, with a suitable design and camera placement technique, a cost-effective solution can be proposed for massive open-area monitoring, i.e. sky monitoring.

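    The cost trade-off the abstract concludes with, that packing more cameras into a node amortizes the per-node overhead, can be illustrated with a toy cost model. The camera count, prices and per-node overhead below are illustrative assumptions, not figures from the paper.

    ```python
    # Toy version of the design-exploration trade-off: fixed per-node
    # overhead (housing, processing, networking) is amortized over more
    # cameras when nodes are packed denser. All costs are assumptions.
    import math

    def system_cost(total_cams, cams_per_node, cam_cost, node_overhead):
        """Total cost when total_cams are spread over ceil(total/per) nodes."""
        nodes = math.ceil(total_cams / cams_per_node)
        return nodes * node_overhead + total_cams * cam_cost

    few_per_node = system_cost(24, 4, 50, 300)    # 6 nodes -> 3000
    many_per_node = system_cost(24, 12, 50, 300)  # 2 nodes -> 1800
    ```

    The model ignores optics: denser nodes need wider or more numerous lenses per node, which is exactly the camera-chip/lens interaction the paper's fuller model explores.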
  • 38.
    Lawal, Najeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O´Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    C++ based System Synthesis of Real-Time Video Processing Systems targeting FPGA Implementation2007In: Proceedings - 21st International Parallel and Distributed Processing Symposium, IPDPS 2007; Abstracts and CD-ROM, Long Beach, CA: IEEE conference proceedings, 2007, p. 1-7, article id 4228121Conference paper (Refereed)
    Abstract [en]

    Implementing real-time video processing systems puts high requirements on computation and memory performance. FPGAs have proven to be an effective implementation architecture for these systems. However, the hardware-based design flow for FPGAs makes the implementation task complex. The system synthesis tool presented in this paper reduces this design complexity. The synthesis is done from a SystemC-based coarse-grain dataflow graph that captures the video processing system. The dataflow graph is optimized and mapped onto an FPGA. The results from real-life video processing systems clearly show that the presented tool produces effective implementations.

  • 39.
    Lawal, Najeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    C++ based System Synthesis of Real-Time Video Processing Systems targeting FPGA Implementation2006In: Proceedings of the FPGA World Conference 2006, 2006Conference paper (Refereed)
    Abstract [en]

    Implementing real-time video processing systems puts high requirements on computation and memory performance. FPGAs have been shown to be an effective implementation architecture for these systems. However, the hardware-based design flow for FPGAs makes the implementation task complex. The system synthesis tool presented in this paper reduces this design complexity. The synthesis is done from a SystemC-based coarse-grain data flow graph that captures the video processing system. The data flow graph is optimized and mapped onto an FPGA. The results from real-life video processing systems clearly show that the presented tool produces effective implementations.

  • 40.
    Lawal, Najeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Address Generation for FPGA RAMs for Efficient Implementation of Real-Time Video Processing Systems2005In: Proceedings - 2005 International Conference on Field Programmable Logic and Applications, FPL, IEEE conference proceedings, 2005, p. 136-141, article id 1515712Conference paper (Refereed)
    Abstract [en]

    The FPGA offers the potential of being a reliable and high-performance reconfigurable platform for the implementation of real-time video processing systems. To utilize the full processing power of the FPGA for video processing applications, the optimization of memory accesses and the implementation of the memory architecture are important issues. This paper presents two approaches, the base pointer approach and the distributed pointer approach, to implement accesses to on-chip FPGA Block RAMs. A comparison of the experimental results obtained using the two approaches on realistic image processing system design cases is presented. The results show that, compared to the base pointer approach, the distributed pointer approach increases the potential processing power of the FPGA as a reconfigurable platform for video processing systems.

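    The two addressing schemes are only named in the abstract; a behavioural sketch of the difference, one shared base counter plus constant per-buffer offsets versus an independent circular counter per buffer, could look like the following. The line width and shared-RAM layout are illustrative assumptions.

    ```python
    # Behavioural model of two RAM addressing schemes for line buffers.
    # Base pointer: one shared counter, fixed offset per buffer.
    # Distributed pointers: each buffer advances its own circular counter.
    # Sizes and layout are illustrative assumptions, not from the paper.

    WIDTH = 640  # pixels per video line (assumed)

    def base_pointer_addresses(base, offsets):
        """One counter 'base' shared by all buffers, constant offset each."""
        return [off + (base % WIDTH) for off in offsets]

    class DistributedPointer:
        """Each buffer keeps its own modulo-WIDTH address counter."""
        def __init__(self):
            self.addr = 0
        def next(self):
            a = self.addr
            self.addr = (self.addr + 1) % WIDTH
            return a

    offsets = [0, WIDTH, 2 * WIDTH]                 # three buffers in one RAM
    shared = base_pointer_addresses(5, offsets)     # [5, 645, 1285]
    ptrs = [DistributedPointer() for _ in range(3)]
    local = [p.next() for p in ptrs]                # [0, 0, 0]
    ```

    In hardware the trade-off is between one adder chain feeding all ports (base pointer) and small independent counters local to each RAM (distributed), which is what lets the latter scale better.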
  • 41.
    Lawal, Najeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O´Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Architecture driven memory allocation for FPGA Based Real-Time Video Processing Systems2011In: Proceedings of the 2011 7th Southern Conference on Programmable Logic, SPL 2011 2011, Article number 5782639, IEEE conference proceedings, 2011, p. 143-148Conference paper (Refereed)
    Abstract [en]

    In this paper, we present an approach that uses information about the FPGA architecture to achieve optimized allocation of embedded memory in real-time video processing systems. A cost function defined in terms of the required memory sizes and the available block- and distributed-RAM resources is used to motivate the allocation decision. This work is a high-level exploration that generates VHDL RTL modules and synthesis constraint files to specify the memory allocation. Results show that the proposed approach achieves an appreciable reduction in block RAM usage over the previous logic-to-memory mapping approach at a negligible increase in logic usage.

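    The abstract names a cost function over required sizes and block-/distributed-RAM resources without giving it; one plausible per-buffer form weighs wasted block-RAM bits against the LUT cost of distributed RAM. Every constant below (BRAM capacity, bits per LUT, LUT weight) is an illustrative assumption.

    ```python
    # Illustrative per-buffer cost function: place a buffer in block RAM
    # or distributed RAM by comparing wasted BRAM bits against a weighted
    # LUT count. All constants are assumptions, not the paper's values.
    import math

    def choose_ram(size_bits, bram_bits=18 * 1024, lut_bits=64, lut_weight=200):
        """Return ('block', cost) or ('distributed', cost) for one buffer."""
        brams_needed = math.ceil(size_bits / bram_bits)
        block_cost = brams_needed * bram_bits - size_bits   # wasted BRAM bits
        luts_needed = math.ceil(size_bits / lut_bits)
        dist_cost = luts_needed * lut_weight                # weighted LUT usage
        if block_cost <= dist_cost:
            return "block", block_cost
        return "distributed", dist_cost

    big, _ = choose_ram(640 * 8)    # a full video line  -> "block"
    tiny, _ = choose_ram(16 * 8)    # a 16-pixel buffer  -> "distributed"
    ```

    The interesting behaviour is at the boundary: small buffers waste most of a BRAM and fall to distributed RAM, which is the reduction in block RAM usage the paper reports.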
  • 42.
    Lawal, Najeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Power-aware automatic constraint generation for FPGA based real-time video processing systems2007In: 25th Norchip Conference, NORCHIP, New York: IEEE conference proceedings, 2007, p. 124-128Conference paper (Refereed)
    Abstract [en]

    The introduction of embedded DSP blocks and embedded memory has made FPGAs an attractive architecture for the implementation of real-time video processing systems. The big bottleneck of the FPGA compared to other programmable architectures is its complex programming model. This paper presents automatic generation of placement and routing constraints for FPGA implementation of real-time video processing systems as one step towards automating the programming model. The constraint generator targets lower power consumption, better resource utilization and reduced development time. Results show that a 28% reduction in dynamic power can be achieved using the proposed approach over traditional logic-to-memory mapping.

  • 43.
    Lawal, Najeem
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Norell, Håkan
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Ram allocation algorithm for video processing applications on FPGA2006In: Journal of Circuits, Systems and Computers, ISSN 0218-1266, Vol. 15, no 5, p. 679-699Article in journal (Refereed)
    Abstract [en]

    This paper presents an algorithm for the allocation of on-chip FPGA Block RAMs for the implementation of real-time video processing systems. The effectiveness of the algorithm is shown through the implementation of realistic image processing systems. The algorithm, which is based on a heuristic, seeks the most cost-effective way of allocating memory objects to the FPGA Block RAMs. The experimental results obtained show that this algorithm generates results which are close to the theoretical optimum for most design cases.

  • 44.
    Malik, Abdul Waheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Cheng, Xin
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Real-time Component Labelling with Centre of Gravity Calculation on FPGA2011In: 2011 Proceedings of Sixth International Conference on Systems, 2011Conference paper (Refereed)
    Abstract [en]

    In this paper we present a hardware unit for real-time component labelling with Centre of Gravity (COG) calculation. The main targeted application area is light spots used as references for robotic navigation. The COG calculation can be done in parallel with a single-pass component labelling unit without first having to resolve merged labels. We present a hardware architecture suitable for implementation of this COG unit on Field Programmable Gate Arrays (FPGAs). As a result, we obtain high frame rate, low power and low latency. The device utilization and estimated power dissipation are reported for a Xilinx Virtex-II Pro device simulated at 86 VGA-sized frames per second. The maximum speed is 410 frames per second at a 126 MHz clock.

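    The parallel COG computation described above can be mimicked in software as a single raster scan that accumulates per-label coordinate sums; labelling itself is omitted here and a pre-labelled image is assumed, so this is a simplified stand-in for the paper's hardware unit, not its architecture.

    ```python
    # Sketch: accumulate centre-of-gravity sums in one raster scan over a
    # pre-labelled image (label 0 = background). In the paper's hardware
    # these accumulators run in parallel with the labelling unit itself.

    def centres_of_gravity(labels):
        """Return {label: (cog_x, cog_y)} from a 2-D list of labels."""
        acc = {}  # label -> [sum_x, sum_y, pixel_count]
        for y, row in enumerate(labels):
            for x, lab in enumerate(row):
                if lab:
                    s = acc.setdefault(lab, [0, 0, 0])
                    s[0] += x
                    s[1] += y
                    s[2] += 1
        return {lab: (sx / n, sy / n) for lab, (sx, sy, n) in acc.items()}

    img = [[0, 1, 1],
           [0, 1, 1],
           [2, 0, 0]]
    cogs = centres_of_gravity(img)   # {1: (1.5, 0.5), 2: (0.0, 2.0)}
    ```

    Because the sums are additive, two provisional labels that later merge can simply have their accumulators added, which is why the hardware need not resolve merged labels before computing the COG.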
  • 45.
    Malik, Abdul Waheed
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Imran, Muhammad
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    Hardware Architecture for Real-time  Computation of Image Component Feature Descriptors on a FPGA2014In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, p. Art. no. 815378-Article in journal (Refereed)
    Abstract [en]

    This paper describes a hardware architecture for real-time image component labeling and the computation of image component feature descriptors. These descriptors are object-related properties used to describe each image component. Embedded machine vision systems demand robust performance, power efficiency as well as minimum area utilization, depending on the deployed application. In the proposed architecture, the hardware modules for component labeling and feature calculation run in parallel. A CMOS image sensor (MT9V032), operating at a maximum clock frequency of 27 MHz, was used to capture the images. The architecture was synthesized and implemented on a Xilinx Spartan-6 FPGA. The developed architecture is capable of processing 390 video frames per second of size 640x480 pixels. Dynamic power consumption is 13 mW at 86 frames per second.

  • 46.
    Meng, Xiaozhou
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Haoming, Zeng
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Portability Analysis of Soft Microprocessor for FPGA2012In: 2012 Mediterranean Conference on Embedded Computing, MECO 2012, IEEE conference proceedings, 2012, p. 5-8Conference paper (Refereed)
    Abstract [en]

    This paper discusses the portability issues of soft microprocessors used on FPGA platforms. The problems of maintaining a long-life-cycle system related to a soft microprocessor's portability are emphasized. The portability of three soft microprocessors, representing three groups of soft microprocessors, was analyzed in the experiments. The results show that a system with a commercially licensed, vendor-independent soft microprocessor possesses higher portability and reliability and is the preferred alternative when designing a long-life-cycle system. The results can give guidance to designers who suffer from microprocessor obsolescence problems.

  • 47.
    Meng, Xiaozhou
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Embedded System Design with Maintenance  Consideration2011In: Proceedings of the 34th International Convention on Information and Communication Technology, Electronics and Microelectronics, MIPRO, IEEE conference proceedings, 2011, p. 124-129Conference paper (Refereed)
    Abstract [en]

    This paper deals with the problems of maintaining a long-lifetime embedded system, including obsolescence, changing functional requirements and technology migration. The aim of the presented work is to analyze the maintainability of long-lifetime embedded systems for different design technologies. FPGA platform solutions are proposed in order to ease system maintenance. Different platform cases are evaluated by analyzing the essence of each case and the consequences of different risk scenarios during system maintenance. Finally, the conclusion is drawn that an FPGA platform with vendor- and device-independent soft IP is the best choice.

  • 48.
    Meng, Xiaozhou
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Portability analysis of an M-JPEG decoder IP from OpenCores2011In: SIES 2011 - 6th IEEE International Symposium on Industrial Embedded Systems, Conference Proceedings, IEEE conference proceedings, 2011, p. 79-82Conference paper (Refereed)
    Abstract [en]

    The reuse of predefined Intellectual Property (IP) can shorten development times and help the designer to meet time-to-market requirements for embedded systems. Using FPGA IP in a proper way can also mitigate the component obsolescence problem. System migration between devices is unavoidable, especially for long-lifetime embedded systems, so IP portability becomes an important issue for system maintenance. This paper presents a case study analyzing the portability of an FPGA-based M-JPEG decoder IP. The lack of any clear separation between computation and communication is shown to limit the decoder's portability with respect to different communication interfaces. Technology- and tool-dependent firm IP components are often supplied by FPGA vendors, and it is possible for these firm IP components to reduce development time. However, the use of these technology- and tool-dependent firm IP specifications within the M-JPEG decoder is shown to limit the decoder's portability with respect to development tools and FPGA vendors.

  • 49.
    Meng, Xiaozhou
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Thörnberg, Benny
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Soft-IP Interface Modification Methodology2011In: Proceedings of 2011 International Conference on Information and Electronics Engineering, 2011Conference paper (Refereed)
    Abstract [en]

    The reuse of predefined Intellectual Property (IP) can lead to great success in system design and help the designer to meet time-to-market requirements. A soft IP usually needs some customization and integration effort rather than being plug-and-play. Communication interface mismatch is one of the problems that integrators often meet. This paper suggests a soft-IP interface modification methodology (SIPIMM) for systems on Field Programmable Gate Arrays (FPGAs). SIPIMM targets an interface-based soft-IP model, which is introduced to ease interface modification and interface reuse. A case study of an open-source IP using SIPIMM for system integration is presented.

  • 50.
    Norell, Håkan
    et al.
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Lawal, Najeem
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    O'Nils, Mattias
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Automatic Generation of Spatial and Temporal Memory Architectures for Embedded Video Processing Systems2007In: EURASIP Journal on Embedded Systems, ISSN 1687-3955, E-ISSN 1687-3963, Vol. 2007, article id 75368Article in journal (Refereed)
    Abstract [en]

    This paper presents a tool for automatic generation of the memory management implementation for spatial and temporal real-time video processing systems targeting field programmable gate arrays (FPGAs). The generator creates all the necessary memory and control functionality for a functional spatio-temporal video processing system. The required memory architecture is automatically optimized and mapped to the FPGAs' memory resources thus producing an efficient implementation in terms of used internal resources. The results in this paper show that the tool is able to efficiently and automatically generate all required memory management modules for both spatial and temporal real-time video processing systems.

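    The spatial buffering such a generator produces can be modelled behaviourally: a 3x3 sliding window fed by line buffers holding the previous rows. The deque-of-rows model and frame contents below are illustrative; the tool itself generates the equivalent HDL, not Python.

    ```python
    # Behavioural sketch of spatial buffering for a 3x3 neighbourhood:
    # a bounded deque models the line buffers (the previous two rows)
    # that an HDL generator would map onto FPGA memory resources.
    from collections import deque

    def windows_3x3(frame):
        """Yield (centre_y, centre_x, 3x3 window) over a 2-D frame."""
        rows = deque(maxlen=3)   # two line buffers + the current row
        for y, row in enumerate(frame):
            rows.append(row)
            if len(rows) == 3:
                for x in range(len(row) - 2):
                    yield y - 1, x + 1, [r[x:x + 3] for r in rows]

    frame = [[1, 2, 3, 4],
             [5, 6, 7, 8],
             [9, 10, 11, 12]]
    first = next(windows_3x3(frame))
    # first == (1, 1, [[1, 2, 3], [5, 6, 7], [9, 10, 11]])
    ```

    A temporal architecture extends the same idea across frames instead of rows, buffering whole previous frames; the generated memory hierarchy then spans both on-chip and off-chip resources.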