Real Time And Secure Video Transmission


02 Nov 2017


Abstract

Digital transmission is increasing day by day, and in multimedia technology the transmitted data must be secure, private, and fast. This paper introduces efficient and secure real-time transmission using a parallel and distributed approach for fast transmission of data, as used in conferencing, video on demand, etc. The paper aims to make video encryption feasible for real-time applications without any extra dedicated hardware at the receiver side; to this end it introduces two techniques, Open MPI and OpenMP.

Keywords: Open MPI, OpenMP, TAPI protocol, AES encryption, hex string.

1. Introduction

Advances in digital content transmission have increased in the past few years. Security and privacy issues of the transmitted data have become an important concern in multimedia technology. The given project proposes a computationally efficient and secure video encryption algorithm with use of distributed & parallel environment. The project aims to make secure video encryption feasible for real-time applications without any extra dedicated hardware at receiver side.

Video conferencing is the most popular application of video transmission; for good conferencing, transmission must be fast and secure. Video conferencing is a collaborative communication tool for communicating with several individuals or groups in real time across different locations: a conference between two or more participants at different sites is conducted over computer networks that transmit audio and video data. Each participant has a video camera, microphone, and speakers mounted on his or her computer. As two participants speak to one another, their voices are carried over the network and delivered to the other's speakers, and whatever appears in front of the video camera appears in a window on the other participant's monitor. Our aim is to send video and audio to the participants in parallel without keeping any participant idle. For such a parallel and distributed approach, this paper introduces techniques that transmit video securely, at high quality, and at high speed: Open MPI and OpenMP.

Open MPI and OpenMP are described in detail below.

2. Methodology

Before we begin with Open MPI and Open MP, it is important to know why we need parallel processing. In a typical case, a sequential code will execute in a thread which is executed on a single processing unit. Thus, if a computer has 2 processors or more (or 2 cores, or 1 processor with Hyper Threading), only a single processor will be used for execution, thus wasting the other processing power. Rather than letting the other processor sit idle (or process other threads from other programs), we can use it to speed up our algorithm.

Parallel processing can be divided into two groups, task based and data based.

Task based: Divide different tasks to different CPUs to be executed in parallel. For example, a Printing thread and a Spell Checking thread running simultaneously in a word processor. Each thread is a separate task.

Data based: Execute the same task, but divide the work load on the data over several CPUs. For example, to convert a color image to grayscale. We can convert the top half of the image on the first CPU, while the lower half is converted on the second CPU (or as many CPUs as you have), thus processing in half the time.

There are several methods to do parallel processing:

Use MPI: MPI stands for the Message Passing Interface. MPI is a standardized API typically used for parallel and/or distributed computing.

The Open MPI Project is an open source MPI-2 implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers and computer science researchers.

Features implemented or in short-term development for Open MPI include:

Full MPI-2 standards conformance

Thread safety and concurrency

Dynamic process spawning

Network and process fault tolerance

Support network heterogeneity

Single library supports all networks

Run-time instrumentation

Many job schedulers supported

Many OS's supported (32 and 64 bit)

Production quality software

High performance on all platforms

Portable and maintainable

Tunable by installers and end-users

Component-based design, documented APIs

Active, responsive mailing list

Open source license based on the BSD license

Several top-level goals:

Create a free, open source, peer-reviewed, production-quality complete MPI-2 implementation.

Provide extremely high, competitive performance (latency, bandwidth, pick your favorite metric).

Directly involve the HPC community with external development and feedback (vendors, 3rd party researchers, users, etc.).

Provide a stable platform for 3rd party research and commercial development.

Help prevent the "forking problem" common to other MPI projects.

Support a wide variety of HPC platforms and environments.

Open MPI is based upon a component architecture; its MPI point-to-point functionality utilizes only a small number of components at run-time. Adding native support for new network interconnects was specifically designed to be easy.

Here is the list of networks that Open MPI natively supports for point-to-point communication:

TCP / Ethernet

Shared memory

Loopback (send-to-self)

Myrinet / GM

Myrinet / MX

Infiniband / OpenIB

Infiniband / mVAPI

Portals

Use OpenMP: OpenMP is an implementation of multithreading, a method of parallelizing whereby a master thread (a series of instructions executed consecutively) forks a specified number of slave threads and a task is divided among them. The threads then run concurrently, with the runtime environment allocating threads to different processors.

The section of code that is meant to run in parallel is marked accordingly, with a preprocessor directive that will cause the threads to form before the section is executed. Each thread has an id attached to it which can be obtained using a function (called omp_get_thread_num()). The thread id is an integer, and the master thread has an id of 0. After the execution of the parallelized code, the threads join back into the master thread, which continues onward to the end of the program.

By default, each thread executes the parallelized section of code independently. Work-sharing constructs can be used to divide a task among the threads so that each thread executes its allocated part of the code. Both task parallelism and data parallelism can be achieved using OpenMP in this way.

Understanding the Fork-and-Join Model

OpenMP uses the fork-and-join parallelism model. In fork-and-join, parallel threads are created and branched out from a master thread to execute an operation; they remain only until the operation has finished, then all the threads are destroyed, leaving only the master thread.

The splitting and joining of threads, including the synchronization of the end result, is handled by OpenMP.

[Figure: the fork-and-join thread model]

Advantages and disadvantages

Advantages

Portable multithreading code (in C/C++ and other languages, one typically has to call platform-specific primitives in order to get multithreading)

Simple: need not deal with message passing as MPI does

Data layout and decomposition is handled automatically by directives.

Incremental parallelism: can work on one part of the program at one time, no dramatic change to code is needed.

Unified code for both serial and parallel applications: OpenMP constructs are treated as comments when sequential compilers are used.

Original (serial) code statements need not, in general, be modified when parallelized with OpenMP. This reduces the chance of inadvertently introducing bugs.

Both coarse-grained and fine-grained parallelism are possible.

In irregular multi-physics applications which do not adhere solely to the SPMD mode of computation, as encountered in tightly coupled fluid-particulate systems, the flexibility of OpenMP can have a big performance advantage over MPI.

Disadvantages

Risk of introducing difficult to debug synchronization bugs and race conditions.

Currently only runs efficiently in shared-memory multiprocessor platforms (see however Intel's Cluster OpenMP and other distributed shared memory platforms).

Requires a compiler that supports OpenMP.

Scalability is limited by memory architecture.

No support for compare-and-swap.

Reliable error handling is missing.

Lacks fine-grained mechanisms to control thread-processor mapping.

Cannot be used on GPUs.

High chance of accidentally writing false sharing code

Multithreaded executables often incur longer startup times, so they can actually run much slower than a single-threaded build; there needs to be a real benefit to being multithreaded. Often multithreading is used when there is no benefit, yet the downsides still exist.

TAPI

As telephony and call control become more common in the desktop computer, a general telephony interface is needed to enable applications to access all the telephony options available on any computer. The media or data on a call must also be available to applications in a standard manner.

TAPI 3.0 provides simple and generic methods for making connections between two or more computers and accessing any media streams involved in that connection. It abstracts call-control functionality to allow different, and seemingly incompatible, communication protocols to expose a common interface to applications.

IP telephony is poised for explosive growth, as organizations begin a historic shift from expensive and inflexible circuit-switched public telephone networks to intelligent, flexible, and inexpensive IP networks. Microsoft, in anticipation of this trend, has created a robust computer telephony infrastructure, TAPI. Now in its third major version, TAPI is suitable for quick and easy development of IP telephony applications.

Video Encryption

Broadly, there are two types of encryption:

1. Public key cryptography

2. Private Key cryptography

Public-key cryptography is not applicable to secure real-time video conferencing because its operations are too slow for real-time use.

There are various private-key encryption algorithms:

Naïve algorithm: It encrypts each and every byte of the whole video stream, which gives a higher security level but is not a practical solution when the data size is large.

Selective algorithm: The video is divided into I, P, and B frames. This algorithm operates at several levels: encrypting all headers and I (intra) frames; encrypting all I frames plus all I-blocks in P and B frames; and finally encrypting all frames, as in the Naïve algorithm.

Zig-zag algorithm: It scrambles frames with a random permutation before compressing them. Once the permutation list is known, the algorithm is no longer secure.

AES algorithm: Advanced Encryption Standard. The AES algorithm is a symmetric-key cryptosystem that processes 128-bit data blocks using cipher keys with lengths of 128, 192, or 256 bits. Rijndael, on which AES is based, is more scalable and can handle additional key and block sizes, but these are not included in the standard. The basic operations of AES are described below.

The SubBytes step

In the SubBytes step, each byte in the state matrix is replaced with its entry in a fixed 8-bit lookup table S: b[i,j] = S(a[i,j]). The substitution uses an 8-bit substitution box, the Rijndael S-box. This operation provides the non-linearity in the cipher. The S-box used is derived from the multiplicative inverse over GF(2^8), known to have good non-linearity properties. To avoid attacks based on simple algebraic properties, the S-box is constructed by combining the inverse function with an invertible affine transformation. The S-box is also chosen to avoid any fixed points (and so is a derangement), and also any opposite fixed points.

The ShiftRows step

In the ShiftRows step, the bytes in each row of the state are shifted cyclically to the left; the number of places each byte is shifted differs for each row. For AES, the first row is left unchanged, each byte of the second row is shifted one to the left, and the third and fourth rows are shifted by offsets of two and three respectively: row n is rotated left by n-1 bytes. In this way, each column of the output state of the ShiftRows step is composed of bytes from each column of the input state. For blocks of 128 bits and 192 bits, the shifting pattern is the same. (Rijndael variants with a larger block size have slightly different offsets: for a 256-bit block, the first row is unchanged and the second, third, and fourth rows are shifted by 1, 3, and 4 bytes respectively; this change applies only to the Rijndael cipher with a 256-bit block, as AES does not use 256-bit blocks.) The importance of this step is that, without it, the columns would remain independent of one another and AES would degenerate into four independent block ciphers.

The MixColumns step

In the MixColumns step, each column of the state is multiplied with a fixed polynomial c(x).

In the MixColumns step, the four bytes of each column of the state are combined using an invertible linear transformation. The MixColumns function takes four bytes as input and outputs four bytes, where each input byte affects all four output bytes. Together with ShiftRows, MixColumns provides diffusion in the cipher. During this operation, each column is multiplied by the fixed matrix:

| 2 3 1 1 |
| 1 2 3 1 |
| 1 1 2 3 |
| 3 1 1 2 |

The multiplication operation is defined as follows: multiplication by 1 means no change, multiplication by 2 means shifting one bit to the left, and multiplication by 3 means shifting one bit to the left and then performing an XOR with the initial unshifted value. After shifting, a conditional XOR with 0x11B should be performed if the shifted value is larger than 0xFF.

In a more general sense, each column is treated as a polynomial over GF(2^8) and is then multiplied modulo x^4 + 1 with a fixed polynomial c(x) = 0x03·x^3 + 0x01·x^2 + 0x01·x + 0x02. The coefficients are displayed as the hexadecimal equivalents of the binary representations of bit polynomials from GF(2)[x]. The MixColumns step can also be viewed as multiplication by a particular MDS matrix over a finite field.

The AddRoundKey step

In the AddRoundKey step, the subkey is combined with the state: each byte of the state is combined with the corresponding byte of the round subkey using the bitwise XOR operation (⊕). For each round, a subkey is derived from the main key using Rijndael's key schedule; each subkey is the same size as the state.

3. Implementation

Fig. (a) shows an actual real-time video snapshot on the sender's computer. Fig. (b) shows the video snapshot on the receiver's computer without applying any technique. Fig. (c) shows the video snapshot on the receiver's computer after applying Open MPI and OpenMP.


Whenever the sender sends video to a receiver, the video is first divided into frames (120 frames/sec). The computer, a digital electronic device, uses a binary encoder to convert those frames into binary form, e.g. 010001100010; transmitting long binary strings takes too much time, so Open MPI and OpenMP convert the binary bits into a hex string, e.g. 0100 0110 0010 becomes "462", a 3-character string that is easy to transfer and requires far less time than the 12-bit binary form. Many receivers send requests to the sender; MPI_Comm_accept establishes communication with a receiver. It is collective over the calling communicator and returns an intercommunicator that allows communication with the receiver, once the receiver has connected to the MPI_Comm_accept side using the MPI_Comm_connect function. Many programs are written with the master-slave model, where one process (such as the rank-zero process) plays a supervisory role and the other processes serve as compute nodes; in this framework, MPI_Comm_size and MPI_Comm_rank are useful for determining the roles of the various processes of a communicator. When the connection is established through the server's channel, OpenMP provides multiple threads that work in parallel, so instead of sending video to the receivers one by one, OpenMP sends it in parallel, which gives fast transmission. At the receiver's side, a decoder decodes the hex string back to binary and the binary bits back to frames; frames in continuous motion are nothing but video.

4. Conclusion

This paper has discussed and introduced new techniques, Open MPI and OpenMP, for a parallel and distributed approach, taking video conferencing as the application, together with the TAPI protocol and the various encryption methods available for real-time video transmission. The experimental results, shown as video snapshots, indicate that the new technique gives fast video transmission and keeps it secure from attackers by applying the AES encryption algorithm.


