Monday, October 30, 2017

Web Intelligence And Agent Systems: An International Journal - IEEE Paper

Summary: Web Intelligence and Agent Systems: An International Journal covers Web and agent intelligence systems.

Web Intelligence and Agent Systems: An International Journal (WIAS) is an official journal of the Web Intelligence Consortium (WIC), an international organization dedicated to promoting collaborative scientific research and industrial development in the era of Web and agent intelligence. WIAS seeks to collaborate with major societies and international conferences in these fields. Presently, it has established ties with the International Conference on Web Intelligence and the International Conference on Intelligent Agent Technology. WIAS is a peer-reviewed journal that publishes four issues a year in both electronic and print form.

WIAS aims to achieve a disciplinary balance between Web technology and intelligent agent technology. It is committed to deepening the understanding of computational, logical, cognitive, physical, and social foundations as well as the enabling technologies for developing and applying Web-based intelligence and autonomous agent systems. The journal features high-quality, original research papers (including state-of-the-art reviews), brief papers, and letters in all theoretical and technology areas that make up the field.

Publisher: IOS Press
More information: http://www.iospress.nl/loadtop/load.php?isbn=15701263




Keywords agent infrastructure, agent architecture, agent self-organization, agent-based knowledge discovery, agent-mediated markets, autonomy-oriented computing, cooperative problem solving, distributed intelligence, web-based systems, computing paradigms, agent-based web intelligence technologies, grid intelligence, information ecology, knowledge management, networks, middleware, ontology engineering, personalization techniques, semantic web, web services and interoperability, ubiquitous computing, social intelligence, web information filtering, web information retrieval, web mining, web farming, wisdom web

Intelligent Virtual Agents for Education and Training: Opportunities and Challenges - Abstract - IEEE Paper

Virtual Agents: “The current darling of the media,” says Forrester (I believe they refer to my evolving relationship with Alexa). They range from simple chatbots to advanced systems that can interact with humans, and are currently used in customer service and support and as smart home managers. Sample vendors: Amazon, Apple, Artificial Solutions, Assist AI, Creative Virtual, Google, IBM, IPsoft, Microsoft, Satisfi.

Saturday, October 28, 2017

Motorola - Moto G5S Plus 4G LTE with 64GB Memory smartphone (one of the best at the mid-range)

The Motorola Moto G5S Plus smartphone was launched in August 2017. The phone comes with a 5.50-inch touchscreen display with a resolution of 1080 by 1920 pixels. The Motorola Moto G5S Plus price in India starts from Rs. 15,999 (Amazon exclusive).

The Motorola Moto G5S Plus is powered by a 2GHz octa-core Qualcomm Snapdragon 625 processor and comes with 4GB of RAM. The phone packs 64GB of internal storage that can be expanded up to 128GB via a microSD card. As far as cameras are concerned, the Motorola Moto G5S Plus packs a dual 13-megapixel primary camera on the rear and an 8-megapixel front shooter for selfies.

The Motorola Moto G5S Plus runs Android 7.1 and is powered by a 3000mAh non-removable battery. It measures 153.50 x 76.20 x 9.50 mm (height x width x thickness) and weighs 168.00 grams.

The Motorola Moto G5S Plus is a dual-SIM (GSM and GSM) smartphone that accepts two Nano-SIM cards. Connectivity options include Wi-Fi, GPS, Bluetooth, USB OTG, 3G and 4G (Jio support in India). Sensors on the phone include a proximity sensor, accelerometer, ambient light sensor, NFC and gyroscope.

OUR VERDICT
More metal, more screen, good battery life and a still-great price make the Moto G5S Plus one of the best affordable Androids around.

FOR

  • Upgraded full metal shell
  • Very good value
  • Good battery life

AGAINST

  • Significant camera shutter lag
  • Battery no bigger than G5S
  • Occasional app crashes


Mid-range specs at an affordable price
  • High-quality metal build
  • A 5.5-inch screen
                                                                                                                   

Thursday, March 17, 2016

IMPLEMENTATION OF OCR USING NEURAL NETWORK IEEE Paper and Paper Presentation

Abstract

The aim of the project is to develop OCR software for Tamil character recognition. OCR (optical character recognition) is the mechanical or electronic translation of images of typewritten or handwritten text (usually captured by a scanner) into machine-editable text. OCR is a field of research in pattern recognition, artificial intelligence and machine vision. Character recognition most often describes the ability of a computer to translate printed or handwritten text into machine-readable text. In this paper we focus on recognition of the English alphabet in a given scanned text document with the help of neural networks. Using the MATLAB Neural Network Toolbox, we recognize handwritten characters by projecting them onto grids of different sizes. The first step is image acquisition, which acquires the scanned image; this is followed by noise filtering, smoothing and normalization of the scanned image, rendering it suitable for segmentation, where the image is decomposed into sub-images. Feature extraction improves the recognition rate and reduces misclassification. We use a character extraction and edge detection algorithm to train the neural network to classify and recognize the handwritten characters. Existing applications similar to ours contain many mismatches and errors; these are rectified in our project, which increases the accuracy of text character recognition.


IMPLEMENTATION OF TAMIL OCR (OPTICAL CHARACTER RECOGNITION) USING NEURAL NETWORK
 Objectives

  • The objective of this project is to develop a robust OCR for printed Tamil scripts that can deliver the desired performance for converting legacy printed documents into an electronically accessible format.
  • It should be a boon to people who currently rely on proprietary OCRs.
  • It will be a welcome addition for the magazine and publishing industry.


 Abstract

  • The aim of the project is to develop OCR software for Tamil character recognition.
  • OCR (optical character recognition) is the translation of images of typewritten or handwritten text (usually captured by a scanner) into machine-editable text.
  • In this project, the focus is on recognition of the Tamil alphabet in a given scanned text document with the help of neural networks.
  • Handwritten characters are recognized by projecting them onto grids of different sizes, using Java.


 Introduction

  • OCR is the process of translating images of handwritten, typewritten, or printed text into a format understood by machines, for the purpose of editing, indexing/searching, and reducing storage size.
  • This project builds an optical character recognizer that uses an artificial neural network as the back end to solve the classification problem.
  • The input to the OCR system is pages of scanned text.


 Product Functions

  • Tamil optical character recognition converts a text image into a text document.
  • The OCR includes a way to edit the recognized text directly.
  • It enables users to store the text as a separate file on the system.


 Applications of Tamil-OCR 

  • Data entry for business documents, e.g. cheques, passports, invoices, bank statements and receipts
  • Automatic extraction of key information from insurance documents
  • Extracting business card information into a contact list
  • Quickly producing textual versions of printed documents, e.g. book scanning for Project Gutenberg
  • Making electronic images of printed documents searchable, e.g. Google Books
  • Defeating CAPTCHA anti-bot systems, though these are specifically designed to prevent OCR

 Existing system

  • There is a growing demand from users to convert printed documents into electronic documents while maintaining the security of their data.
  • Hence the basic OCR system was invented to convert the data available on paper into computer-processable documents, so that the documents become editable and reusable.
  • The existing systems are not efficient for the Tamil language and make many errors in detecting its characters.
  • The existing systems also take a long time to recognize the characters in an image.


 Proposed System

  • The proposed Tamil optical character recognition system performs a series of operations to make the recognition process easier and more accurate.
  • To perform character recognition faster.
  • To obtain high accuracy in text recognition.
  • To develop an OCR for the Tamil language.


 Literature Survey

[1] Kauleshwar Prasad, Devvrat C. Nigam, Ashmika Lakhotiya and Dheeren Umre (2013), "Character Recognition Using Neural Network Toolbox", International Journal of u- and e- Service, Science and Technology.
  • This paper focuses on recognition of the alphabet in a given scanned text document with the help of neural networks.
  • It uses a character extraction and edge detection algorithm to train the neural network to classify and recognize the characters.

[2] Venu Govindaraju, Srirangaraj (Ranga) Setlur (2013), "Guide to OCR for Indic Scripts: Document Recognition and Retrieval".
  • It helps in developing a new approach to deal with the problems of Indic scripts.

[3] Java Neural Network Framework Neuroph, link: http://sourceforge.net/projects/neuroph/?source=directory
  • The above website provides information about Neuroph.



 Operating Environment
Software Requirements:

  • Windows/Linux: The Tamil optical character recognition application operates on Windows (XP/7/8) and Linux. Any device that supports these versions of Windows or Linux will be able to run the software.
  • NetBeans: NetBeans's extensive GUI features and toolkits make GUI development easy and flexible. The software is developed using NetBeans.
  • OpenOffice: OpenOffice is a leading open-source office suite for word processing.


 System Features
Language Auto Detection

  • Tamil optical character recognition detects the language based on the Tamil Unicode range.
  • Tamil characters fall within a specific Unicode block (U+0B80 to U+0BFF), as in the sketch below.
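A minimal sketch of this check is given below. It assumes only that Tamil characters occupy the standard Tamil Unicode block (U+0B80 to U+0BFF); the class and method names are illustrative and not taken from the project source.

```java
// Hedged sketch: Unicode-range based language detection for Tamil text.
// Class and method names are illustrative, not from the project source.
public class TamilLanguageDetector {

    /** True if the code point lies inside the Tamil Unicode block (U+0B80 to U+0BFF). */
    public static boolean isTamil(int codePoint) {
        return Character.UnicodeBlock.of(codePoint) == Character.UnicodeBlock.TAMIL;
    }

    /** Treats the text as Tamil when at least half of its letters are Tamil code points. */
    public static boolean isMostlyTamil(String text) {
        long letters = text.codePoints().filter(Character::isLetter).count();
        long tamil   = text.codePoints().filter(TamilLanguageDetector::isTamil).count();
        return letters > 0 && 2 * tamil >= letters;
    }

    public static void main(String[] args) {
        System.out.println(isMostlyTamil("தமிழ்"));   // true
        System.out.println(isMostlyTamil("hello"));   // false
    }
}
```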

Character Mapping

  • The Tamil OCR automatically maps characters by defining a bounding box for each character.
  • A blank space acts as the delimiter between boxes, as sketched below.
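The hedged Java sketch below shows one common way such mapping can be done on an already binarized line image: runs of inked columns become character boxes and blank columns act as the delimiters. The CharacterMapper class and its Box type are assumptions for illustration, not the project's actual code.

```java
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

// Illustrative column-projection character mapping on a binarized line image.
public class CharacterMapper {

    /** Horizontal extent of one character in the line image (hypothetical helper type). */
    public static class Box {
        public final int startX, endX;
        Box(int startX, int endX) { this.startX = startX; this.endX = endX; }
    }

    public static List<Box> mapCharacters(BufferedImage binaryLine) {
        List<Box> boxes = new ArrayList<>();
        int start = -1;
        for (int x = 0; x < binaryLine.getWidth(); x++) {
            boolean hasInk = false;
            for (int y = 0; y < binaryLine.getHeight() && !hasInk; y++) {
                hasInk = (binaryLine.getRGB(x, y) & 0xFFFFFF) == 0;   // black pixel = ink
            }
            if (hasInk && start < 0) {
                start = x;                          // a new character box opens
            } else if (!hasInk && start >= 0) {
                boxes.add(new Box(start, x - 1));   // a blank column closes the current box
                start = -1;
            }
        }
        if (start >= 0) boxes.add(new Box(start, binaryLine.getWidth() - 1));
        return boxes;
    }
}
```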

Font & Style Detection

  • The OCR automatically detects the font style and the size of the font.






MODULES
Module 1: Image Acquisition

  • In image acquisition, the recognition system acquires a scanned image as its input.
  • The image should be in a standard format such as JPEG, GIF, etc.
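A minimal sketch of this step using the standard javax.imageio API is shown below; the input file name is a hypothetical example.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

// Minimal sketch of image acquisition: load a scanned page into a BufferedImage.
public class ImageAcquisition {

    public static BufferedImage acquire(String path) throws IOException {
        BufferedImage image = ImageIO.read(new File(path));   // handles JPEG, PNG, GIF, ...
        if (image == null) {
            throw new IOException("Unsupported or corrupt image format: " + path);
        }
        return image;
    }

    public static void main(String[] args) throws IOException {
        BufferedImage scan = acquire("scanned_page.jpg");      // hypothetical input file
        System.out.println("Loaded " + scan.getWidth() + "x" + scan.getHeight() + " image");
    }
}
```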

Module 2: Preprocessing

  • The role of pre-processing is to separate the pattern of interest from the background.
  • Noise filtering, smoothing and normalization are done in this step.
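As a hedged illustration, the sketch below performs the grayscale-plus-threshold binarization also described in the assumptions section. The fixed threshold of 128 is an assumed default; a real implementation might add noise filtering, smoothing, or an adaptive threshold before this step.

```java
import java.awt.image.BufferedImage;

// Sketch of preprocessing: convert to grayscale, then binarize with a fixed threshold.
public class Preprocessor {

    public static BufferedImage binarize(BufferedImage input, int threshold) {
        BufferedImage out = new BufferedImage(
                input.getWidth(), input.getHeight(), BufferedImage.TYPE_BYTE_BINARY);
        for (int y = 0; y < input.getHeight(); y++) {
            for (int x = 0; x < input.getWidth(); x++) {
                int rgb = input.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                int gray = (int) (0.299 * r + 0.587 * g + 0.114 * b);   // luminance
                int bw = gray < threshold ? 0x000000 : 0xFFFFFF;        // ink vs. background
                out.setRGB(x, y, bw);
            }
        }
        return out;
    }
}
```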

Module 3: Segmentation

  • An image of a sequence of characters is decomposed into sub-images of individual characters.
  • This labelling provides information about the number of characters in the image.
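The sketch below shows one plausible way to cut a line image into per-character sub-images, reusing the hypothetical CharacterMapper.Box type from the character-mapping sketch earlier; it is an illustration, not the project's actual segmentation code.

```java
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

// Illustrative segmentation: each mapped character box becomes its own sub-image.
// The size of the returned list is the number of characters detected in the line.
public class Segmenter {

    public static List<BufferedImage> segment(BufferedImage line, List<CharacterMapper.Box> boxes) {
        List<BufferedImage> glyphs = new ArrayList<>();
        for (CharacterMapper.Box box : boxes) {
            int width = box.endX - box.startX + 1;
            glyphs.add(line.getSubimage(box.startX, 0, width, line.getHeight()));
        }
        return glyphs;
    }
}
```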




 Module 4: Feature Extraction

  • The features of the characters that are crucial for classifying them at the recognition stage are extracted.
  • Every character image is divided into equal zones.
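A hedged sketch of zoning-based feature extraction is given below: the glyph is divided into an equal grid and the ink density of each zone becomes one feature value. The grid size (for example 8 x 8, giving 64 features) is an assumption made for illustration.

```java
import java.awt.image.BufferedImage;

// Sketch of zoning features: ink density per zone of an equally divided character image.
public class ZoningFeatureExtractor {

    public static double[] extract(BufferedImage glyph, int zonesX, int zonesY) {
        double[] features = new double[zonesX * zonesY];
        for (int zy = 0; zy < zonesY; zy++) {
            int y0 = zy * glyph.getHeight() / zonesY;
            int y1 = (zy + 1) * glyph.getHeight() / zonesY;
            for (int zx = 0; zx < zonesX; zx++) {
                int x0 = zx * glyph.getWidth() / zonesX;
                int x1 = (zx + 1) * glyph.getWidth() / zonesX;
                int ink = 0, total = 0;
                for (int y = y0; y < y1; y++) {
                    for (int x = x0; x < x1; x++) {
                        if ((glyph.getRGB(x, y) & 0xFFFFFF) == 0) ink++;   // black = ink
                        total++;
                    }
                }
                features[zy * zonesX + zx] = total == 0 ? 0.0 : (double) ink / total;
            }
        }
        return features;
    }
}
```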

Module 5: Classification and Recognition

  • A feed-forward back-propagation neural network is used in this work for classifying and recognizing the handwritten characters.
  • The pixels derived from the resized character in the segmentation stage form the input to the classifier.
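A hedged sketch of such a classifier, built on the Neuroph framework cited in the literature survey, is shown below. It assumes a recent Neuroph release; the layer sizes and the mapping from output neurons to Tamil character labels are assumptions, since the project's actual network configuration is not specified here.

```java
import org.neuroph.core.data.DataSet;
import org.neuroph.nnet.MultiLayerPerceptron;
import org.neuroph.util.TransferFunctionType;

// Sketch of the classification stage: a feed-forward network trained with back-propagation.
public class CharacterClassifier {

    private final MultiLayerPerceptron network;

    public CharacterClassifier(int inputSize, int hiddenSize, int classCount) {
        // Feed-forward multilayer perceptron with sigmoid units; Neuroph trains this
        // network type with a back-propagation based learning rule.
        network = new MultiLayerPerceptron(
                TransferFunctionType.SIGMOID, inputSize, hiddenSize, classCount);
    }

    /** Train on rows of (feature vector, one-hot class label). */
    public void train(DataSet trainingSet) {
        network.learn(trainingSet);
    }

    /** Return the index of the most activated output neuron for one feature vector. */
    public int classify(double[] features) {
        network.setInput(features);
        network.calculate();
        double[] output = network.getOutput();
        int best = 0;
        for (int i = 1; i < output.length; i++) {
            if (output[i] > output[best]) best = i;
        }
        return best;   // mapped back to a Tamil character label elsewhere
    }
}
```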

Module 6: Post-processing

  • The post-processing stage is the final stage of the proposed recognition system.
  • It outputs the corresponding recognized characters in structured text form.
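A small sketch of this final step is shown below: the recognized characters are joined and written to a UTF-8 text file so that the result is editable. The label strings and the output file name are illustrative.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch of post-processing: assemble recognized characters into an editable text file.
public class PostProcessor {

    public static void writeText(List<String> recognizedCharacters, Path outputFile)
            throws IOException {
        String text = String.join("", recognizedCharacters);
        Files.writeString(outputFile, text, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        writeText(List.of("த", "மி", "ழ்"), Path.of("recognized.txt"));   // illustrative labels
    }
}
```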


 Design and Implementation Constraints

  • Because the system is designed to identify characters from an image, it must be trained on each character many times.
  • Training the network well enough to improve the quality of recognition is a difficult challenge.
  • Other constraints such as noise filtering and segmentation of each character are also worth considering.
  • The application is meant to be accurate even when dealing with noisy data, so each portion must be designed and implemented with efficiency in mind.
  • It can recognize only the Tamil language.



 Assumptions and Dependencies

  • Training should be given for each character at various sizes.
  • It is necessary to convert the image into binary format.
  • The image is first converted to grayscale and then binarized using a threshold on the grayscale values.


 Diagrams and screenshots (not reproduced here): Use Case Diagram, Sequence Diagram, Activity Diagram, Class Diagram, Tamil OCR GUI, Final View of the OCR GUI, Output of the OCR GUI
 Result

  • The result of the project is that Tamil OCR is implemented to recognize the Tamil text in a scanned image and convert it into an editable text format.
  • This increases the accuracy of the OCR process for the Tamil language.
  • This encourages much of the Tamil press to migrate towards free and open-source software.
  • The GUI for Tamil OCR makes it easy for people who have no knowledge of OCR to use it properly.


 Conclusion

  • Our system is developed for end users who have basic knowledge of Linux/Windows.
  • It will perform the intended operation under almost all circumstances.
  • The GUI for Tamil OCR will make it easy for people who have no knowledge of OCR to use it properly.
  • The process of unit testing involves independent analysis of the system in parts, or units.
  • This project is OS independent, so people can work on their desired operating system.


 References
  • Kauleshwar Prasad, Devvrat C. Nigam, Ashmika Lakhotiya and Dheeren Umre, "Character Recognition Using Neural Network Toolbox", International Journal of u- and e- Service, Science and Technology, Vol. 6, No. 1, February 2013.
  • Venu Govindaraju, Srirangaraj (Ranga) Setlur, "Guide to OCR for Indic Scripts: Document Recognition and Retrieval".
  • Java Neural Network Framework Neuroph, link: http://sourceforge.net/projects/neuroph/?source=directory









Integration of Sound Signature in Graphical Password Authentication System - IEEE Paper Download

Integration of Sound Signature in Graphical Password Authentication System

ABSTRACT
Here a graphical password system with a supportive sound signature to increase the memorability of the password is discussed. In the proposed work, a click-based graphical password scheme called Cued Click Points (CCP) is presented. In this system a password consists of a sequence of images in which the user selects one click-point per image. In addition, the user is asked to select a sound signature corresponding to each click point; this sound signature is used to help the user recall the click point on an image. The system showed very good performance in terms of speed, accuracy, and ease of use. Users preferred CCP to PassPoints, saying that selecting and remembering only one point per image was easier, and that the sound signature helped considerably in recalling the click points.

Keywords: sound signature, authentication

1. Introduction
Passwords are used for: (a) authentication (establishing that users are who they say they are); (b) authorization (the process used to decide whether an authenticated person is allowed to access specific information or functions); and (c) access control (restriction of access, which includes authentication and authorization). Users mostly select passwords that are predictable. This happens with both graphical and text-based passwords. Users tend to choose memorable passwords; unfortunately, this means that the passwords tend to follow predictable patterns that are easier for attackers to guess. While the predictability problem can be solved by disallowing user choice and assigning passwords to users, this usually leads to usability issues, since users cannot easily remember such random passwords. A number of graphical password systems have been developed, and studies show that text-based passwords suffer from both security and usability problems [1][8]. According to a recent news article, a security team at a company ran a network password cracker and within 30 seconds identified about 80% of the passwords [2]. It is well known that the human brain is better at recognizing and recalling images than text [3][7]; graphical passwords exploit this human characteristic.

2. PREVIOUS WORK

Considerable work has been done in this area. The best known of these systems is Passfaces [4][7]. Brostoff and Sasse (2000) carried out an empirical study of Passfaces, which illustrates well how a graphical password recognition system typically operates. Blonder-style passwords are based on cued recall. A user clicks on several previously chosen locations in a single image to log in. As implemented by Passlogix Corporation (Boroditsky, 2002), the user chooses several predefined regions in an image as his or her password. To log in, the user has to click on the same regions. The problem with this scheme is that the number of predefined regions is small, perhaps a few dozen in a picture. The password may have to be up to 12 clicks for adequate security, which is again tedious for the user. Another problem of this system is the need for the predefined regions to be readily identifiable; in effect, this requires artificial, cartoon-like images rather than complex, real-world scenes [5][6]. Cued Click Points (CCP) is a proposed alternative to PassPoints. In CCP, users click one point on each of 5 images rather than on five points on one image. It offers cued recall and introduces visual cues that instantly alert valid users if they have made a mistake when entering their latest click-point (at which point they can cancel their attempt and retry from the beginning). It also makes attacks based on hotspot analysis more challenging. As shown in Figure 1, each click results in showing the next image, in effect leading users down a "path" as they click on their sequence of points. A wrong click leads down an incorrect path, with an explicit indication of authentication failure only after the final click. Users can choose their images only to the extent that their click-point dictates the next image. If they dislike the resulting images, they can create a new password involving different click-points to get different images.
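As a hedged illustration only (not the paper's implementation), the sketch below shows how a CCP-style scheme can deterministically map a click to the next image: the click is snapped to a tolerance grid cell and the cell index selects the next image, so a wrong click silently leads down a different image path. The tolerance size, image count, and the stand-in hash are arbitrary assumptions.

```java
// Illustrative Cued Click Points helper: click -> tolerance cell -> next image.
public class CuedClickPoints {

    private static final int TOLERANCE = 19;     // tolerance square size in pixels (assumed)
    private static final int IMAGE_COUNT = 330;  // size of the image pool (assumed)

    /** Snap a click to the tolerance grid so small inaccuracies map to the same cell. */
    public static long cellIndex(int x, int y, int imageWidth) {
        int gx = x / TOLERANCE;
        int gy = y / TOLERANCE;
        return (long) gy * (imageWidth / TOLERANCE + 1) + gx;
    }

    /** Deterministically choose the next image from the current image and the click cell. */
    public static int nextImage(int currentImageId, long cell) {
        long h = currentImageId * 31L + cell * 17L;  // stand-in for a keyed hash in a real system
        return (int) Math.floorMod(h, IMAGE_COUNT);
    }
}
```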

3. PROPOSED WORK 
In the proposed work we have integrated a sound signature to help in recalling the password. No system has been developed so far which uses a sound signature in graphical password authentication. Studies say that a sound signature or tone can be used to recall facts like images, text, etc. [6]. In daily life we see various examples of recalling an object by the sound related to that object [6]. Our idea is inspired by this novel human ability.



 