2017 MATLAB Projects at Chennai

Looking for a 2017 MATLAB project? Contact +91 9894220795 / +91 44 42647783. For more details visit www.verilogcourseteam.com

Apr 5, 2017

Design and Analysis of Low-Complexity Multi-Antenna Relaying for OFDM-Based Relay Networks

Abstract—A class of low-complexity multi-antenna relaying schemes is proposed for orthogonal frequency division multiplexing (OFDM) based one-way and two-way relay networks. In the proposed schemes, two single-antenna terminal nodes complete the data transfer in two time phases via a multi-antenna amplify-and-forward relay, without channel state information, discrete Fourier transform (DFT), or inverse DFT. The relay signal processing consists of receive combining, power scaling, and transmit diversity: instantaneous time-domain power scaling is proposed for power scaling, and power-based selection combining and cyclic delay combining are proposed for receive combining, to improve performance using only time-domain operations. An approximation to the outage probability is also derived to predict the coded performance behavior under practical channel models.

PROJECT FLOW
  1. Generate message data
  2. Select the number of relays and the protocol (OWR or TWR; refer Fig. 1)
  3. Transmit data over the AWGN channel
  4. Apply the receiver process (refer Fig. 2)
  5. Find the outage probability and BER
  6. Plot the results
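The six-step flow above can be exercised end-to-end in miniature. The sketch below is written in Python/NumPy rather than MATLAB, and substitutes uncoded BPSK over AWGN and a single Rayleigh-faded link for the full multi-antenna relay network; the function names and parameter values are illustrative assumptions, not the project's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ber(snr_db, n_bits=100_000):
    """Steps 1-5 in miniature: BPSK over AWGN, then count bit errors."""
    bits = rng.integers(0, 2, n_bits)            # 1. generate message data
    symbols = 1.0 - 2.0 * bits                   # BPSK mapping: 0 -> +1, 1 -> -1
    noise_std = np.sqrt(1.0 / (2 * 10**(snr_db / 10)))
    received = symbols + noise_std * rng.standard_normal(n_bits)  # 3. AWGN channel
    detected = (received < 0).astype(int)        # 4. receiver: hard decision
    return np.mean(detected != bits)             # 5. bit error rate

def outage_probability(snr_db, rate=1.0, n_trials=100_000):
    """Outage: instantaneous capacity of a Rayleigh-faded link falls below `rate`."""
    snr = 10**(snr_db / 10)
    gains = rng.exponential(1.0, n_trials)       # |h|^2 under Rayleigh fading
    capacity = np.log2(1.0 + snr * gains)
    return np.mean(capacity < rate)

for snr_db in (0, 5, 10):                        # 6. tabulate/plot the results
    print(f"SNR {snr_db:2d} dB: BER={simulate_ber(snr_db):.4f}, "
          f"P_out={outage_probability(snr_db):.4f}")
```

Both curves fall with SNR, which is the qualitative behavior the project's BER and outage plots are meant to show.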
  
Fig. 1. System model: (a) Relay network and (b) OWR and TWR protocols.
Proposed Multi-Antenna Relaying Schemes
Fig. 2 describes the general structure of the proposed multi-antenna relaying schemes, which consists of receive combining, power scaling, and transmit diversity, all performed in the time domain.
  1. Receive Combining
  2. Power Scaling
  3. Transmit Diversity
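A minimal sketch of these three time-domain stages, assuming a two-antenna relay, power-based selection combining, a one-sample cyclic delay, and a unit relay power budget (Python/NumPy; an illustration of the stage structure, not the paper's exact algorithm):

```python
import numpy as np

def relay_process(rx, relay_power=1.0):
    """Time-domain AF relay processing for one block of samples received
    on multiple antennas (rx has shape [n_antennas, n_samples]).

    All three stages use only time-domain operations: no DFT/IDFT and no
    channel state information, mirroring the proposed structure."""
    # 1. Receive combining: power-based selection combining keeps the
    #    antenna branch with the largest received power.
    best = np.argmax(np.sum(np.abs(rx)**2, axis=1))
    combined = rx[best]

    # 2. Power scaling: instantaneous time-domain scaling so the
    #    retransmitted block meets the relay power budget.
    gain = np.sqrt(relay_power / np.mean(np.abs(combined)**2))
    scaled = gain * combined

    # 3. Transmit diversity: cyclic delay diversity sends circularly
    #    shifted copies of the block from each transmit antenna.
    delays = [0, 1]  # one-sample cyclic delay on the second antenna
    return np.stack([np.roll(scaled, d) for d in delays])
```

Calling `relay_process` on a complex baseband block returns one shifted copy per transmit antenna, each with average power equal to the relay budget.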
  
Fig. 2. Block diagram of the signal processing units of the relays: proposed multi-antenna relaying.

Simulation Video Demo

Apr 4, 2017

A Reversible Watermarking Technique for Social Network Data Sets for Enabling Data Trust in Cyber, Physical, and Social Computing

Abstract—Social network data are mined to extract interesting patterns. Such data are collected by different researchers and organizations and are usually also shared via different channels. These data sets typically have huge volume because there are millions of social network users throughout the world. In this context, ownership protection of such large data sets becomes relevant. Digital watermarking is in greater demand than any other technique for ensuring rights protection and integrity of the original data sets. The objective of this project is to devise a reversible watermarking technique for social network data that proves ownership rights and also provides a mechanism for data recovery. Robustness of the proposed technique is evaluated through an experimental analysis of two attacks (deletion and insertion). In this project, a Z notation-based formal specification is also provided to show the working of the proposed reversible watermarking technique for social network data sets, enabling data trust in Cyber, Physical, and Social Computing (CPSCom).

PROPOSED TECHNIQUE
The main architecture of the proposed technique is presented in the figure below. It includes the following four major phases: 1) pre-processing; 2) watermark encoding; 3) watermark decoding; and 4) data recovery.
 
Main architecture of the proposed technique.
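The four phases can be sketched with a toy reversible scheme. In the Python sketch below, `SECRET_KEY`, `WATERMARK`, and the `2*v + bit` expansion embedding are illustrative assumptions, not the project's GA-optimized method; it shows pre-processing (keyed hashing into logical groups), reversible encoding, majority-vote decoding, and exact data recovery:

```python
import hashlib

SECRET_KEY = "owner-secret"   # hypothetical secret known only to the data owner
WATERMARK = [1, 0, 1, 1]      # hypothetical omega-bit watermark

def _group(pk):
    """Pre-processing: hash the primary key with the secret key so each
    tuple is assigned a watermark-bit position (a logical group)."""
    digest = hashlib.sha256(f"{SECRET_KEY}:{pk}".encode()).hexdigest()
    return int(digest, 16) % len(WATERMARK)

def encode(rows):
    """Watermark encoding: reversibly expand the chosen integer feature,
    v -> 2*v + bit, so decoding can recover both the bit and v."""
    return [(pk, 2 * v + WATERMARK[_group(pk)]) for pk, v in rows]

def decode(rows):
    """Watermark decoding: majority vote per watermark-bit position."""
    votes = [[0, 0] for _ in WATERMARK]
    for pk, v in rows:
        votes[_group(pk)][v % 2] += 1
    return [int(ones > zeros) for zeros, ones in votes]

def recover(rows):
    """Data recovery: invert the expansion to restore the original data."""
    return [(pk, v // 2) for pk, v in rows]
```

The majority vote is what gives the decoder some resilience to the deletion and insertion attacks mentioned above: each watermark bit is spread over many tuples, so corrupting a few of them does not flip the vote.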

The proposed technique handles big data in a systematic way through logical grouping. For instance, for non-numerical/numerical data encoding, similar records for the same employees hash to the same values and form logical groups; thus, the data size does not increase. Moreover, the data owner is primarily interested in acquiring ownership rights and data recovery. She gets her data watermarked only once, and watermarking is usually done offline; that is, it is performed on the data owner's machine, and the data are delivered to the recipient only after the embedding process is complete. Thus, she can afford a large computation time in exchange for ownership rights and data recovery. For data sets involving a large number of features or a large number of rows ("Big Data"), the data owner may use a separate machine with high computation power to watermark the data sets. This may incur some cost but gains the owner more security (a false claim of ownership can be tackled by watermark encoding and decoding). The computational time of the proposed technique is O(ω ∗ R ∗ F), where ω is the watermark length, R is the total number of tuples in the data set, and F is the number of features selected for watermarking. The number of tuples is usually much larger than the number of features and the watermark length; hence, F ≪ R and ω ≪ R. Therefore, for large databases (with R denoted n), the time complexity of the proposed technique for watermark encoding and decoding is O(n). (The time for computing the GA-based optimal value is not included in this calculation because it is part of pre-processing.)
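The complexity argument can be made concrete with a cost model. The counter below mirrors the stated O(ω ∗ R ∗ F) bound with one constant-time embedding step per watermark bit, tuple, and selected feature; it is a cost model only, not the embedding algorithm itself:

```python
def encoding_cost(omega, R, F):
    """Count constant-time embedding steps for watermark encoding,
    mirroring the stated O(omega * R * F) bound."""
    ops = 0
    for _ in range(omega):          # watermark bits
        for _ in range(R):          # tuples in the data set
            for _ in range(F):      # features selected for watermarking
                ops += 1
    return ops

# With omega and F held at small constants, doubling R doubles the work,
# which is the O(n) behavior claimed for large databases (n = R).
```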

Simulation Video Demo 

VCT App now Available

@ Play Store