A Small Number Of Sectors Per Disk

02 Nov 2017

Disclaimer:
This essay was written and submitted by a student and is not an example of our professional work. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of EssayCompany.

IT Assignment

NAME:

AFFILIATION:

UNIVERSITY:

COURSE TITLE:

DATE OF SUBMISSION:

Q1. What are the advantages and disadvantages of having a small number of sectors per disk cluster?

Disk drives are direct-access (random-access) devices: each unit of storage, known as a sector, has a unique address that can be accessed independently of the sectors around it. Sectors are divisions of concentric circles called tracks. In most systems, every track holds approximately the same number of sectors (Null and Lobur, 2003).

The logical organization of a disk is a function of the OS, and a prime component of that organization is the pattern in which sectors are mapped. A fixed disk may contain so many sectors that tracking each one individually would be unmanageable: an allocation table recording the status of every sector would consume a great deal of disk space, and every status check (each retrieval) would take a long time. If status checks are frequent, this becomes infeasible. For this reason, the OS groups addressable sectors into blocks, also called clusters, to ease the file management process (Patterson & Hennessy, 2009).

The number of sectors per cluster determines the size of the allocation table, and since the allocation table is scanned to check a sector's status, the time spent tracking through the table is directly proportional to its size. If clusters are small, less space is wasted by files that do not fill an entire cluster; the disadvantage is that small clusters enlarge the allocation table and make status checks slower (Morley & Parker, 2012).
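A rough calculation makes the trade-off concrete. The disk size, sector size, and file sizes below are hypothetical, chosen only to show how the allocation table grows as clusters shrink, while slack (the unused tail of each file's last cluster) grows as clusters get larger:

```python
# Illustrative only: cluster-size trade-offs for a hypothetical 1 GiB disk.
DISK_BYTES = 1 * 1024**3           # assumed disk capacity
SECTOR = 512                       # bytes per sector (typical)
FILE_SIZES = [700, 3_000, 50_000]  # hypothetical file sizes in bytes

def tradeoff(sectors_per_cluster):
    cluster = SECTOR * sectors_per_cluster
    table_entries = DISK_BYTES // cluster                # table grows as clusters shrink
    slack = sum(-size % cluster for size in FILE_SIZES)  # wasted tail of each last cluster
    return table_entries, slack

for spc in (1, 8, 64):
    entries, slack = tradeoff(spc)
    print(f"{spc:>2} sectors/cluster: {entries:>8} table entries, {slack} bytes of slack")
```

Larger clusters shrink the table (faster status checks) at the cost of more slack per file, and vice versa.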

Q2. Discuss the following questions relative to compilers.

a. Which phase of a compiler would give you a syntax error?

Of the three phases of a compiler (lexical analysis, syntax analysis, and semantic analysis), it is the syntax analysis phase that reports syntax errors. This kind of analysis is termed parsing. The lexical analyzer generates tokens, and during parsing these tokens are grouped together to construct a hierarchical structure called the parse tree, also known as a syntax tree. When the compiler is unable to build a parse tree for the program, it displays a syntax error (Kakde, 2005).
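As a concrete illustration (using Python's own compiler rather than a generic one), the parser fails to build a syntax tree for malformed input and raises a SyntaxError before any code runs:

```python
import ast

source = "if x > 1\n    print(x)"   # missing colon after the condition

try:
    ast.parse(source)               # parsing builds the syntax tree
except SyntaxError as err:
    print("syntax error on line", err.lineno)
```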

b. Which phase complains about undefined variables?

The semantic analysis phase of the compiler complains about undefined variables. It follows syntax analysis and tries to make sense of the program by determining what the program means, performing consistency checks that parsing cannot. Semantic analysis consists of several stages. One stage is name analysis, which identifies which declaration each variable use refers to, and whether the variable has been declared at all. This is done with the help of an abstract syntax tree, whose information is collected into a symbol table holding all the definitions and type information (Apt, 2001).
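A toy version of name analysis can be sketched in a few lines. This is an illustrative simplification (it ignores scopes and statement order, walking the whole tree with a single symbol set), not a real compiler pass:

```python
import ast

def undefined_names(source):
    """Toy name analysis (illustrative): walk the AST, record assigned
    names in a symbol set, and report names used but never defined."""
    tree = ast.parse(source)
    defined, undefined = set(), []
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                defined.add(node.id)          # a definition: record it
            elif node.id not in defined:
                undefined.append(node.id)     # a use with no definition
    return undefined

print(undefined_names("x = 1\ny = x + z"))    # z is never defined
```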

c. If you try to add an integer to a character string, which compiler phase would emit the error message?

Within the semantic analysis phase is a stage called type analysis. It looks up each identifier resolved during name analysis and makes sure the operand types conform; this is known as compile-time type checking. It has an advantage over run-time type checking because it catches errors before the program runs, which also makes it possible to produce more efficient code (Baldwin, 2003). Adding an integer to a character string is therefore rejected in this stage.
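A minimal type-analysis stage might look like the sketch below. The environment mapping names to types stands in for the symbol table produced by name analysis; everything here is a simplified illustration, not a production checker:

```python
import ast

def expr_type(node, env):
    """Minimal compile-time type checker (illustrative): infer a type for
    an expression and reject additions whose operand types differ."""
    if isinstance(node, ast.Constant):
        return type(node.value).__name__
    if isinstance(node, ast.Name):
        return env[node.id]                   # type from the symbol table
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        left = expr_type(node.left, env)
        right = expr_type(node.right, env)
        if left != right:
            raise TypeError(f"cannot add {left} to {right}")
        return left
    raise NotImplementedError(type(node).__name__)

tree = ast.parse("n + s", mode="eval")
try:
    expr_type(tree.body, {"n": "int", "s": "str"})
except TypeError as err:
    print(err)   # cannot add int to str
```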

Q3. Describe in your own words the difference between the role of a system analyst and the role of a program analyst.

System analysis is the study of complete or partial business systems, and the application of the information gained from that study to the design, documentation, and implementation of new or improved systems. The person who performs this analysis on business systems is termed a system analyst. Compared with a program analyst, a system analyst's task is to identify the kind of technology required in particular situations, whereas a program analyst determines the appropriate instructions to code for specific tasks (Downey, 2006). The two roles often work hand in hand, and both are equally important in their respective areas.

The roles and responsibilities of a system analyst include:

Defining a problem and subsequently analyzing it

Collecting data for analysis

Requirements determination

Development of alternate solutions

Designing of test programs

Providing improvisations to the current system

Developing a new system

In a nutshell, a system analyst is a problem solver. A programmer-analyst, by contrast, is responsible not only for programming but also contributes to analysis work. A programmer-analyst writes the software instructions that create applications, whether stand-alone or backed by server or service access. Beyond programming itself, they are involved in the design, implementation, and testing phases of database schemas, and in designing the business logic of the application (Downey, 2006). Design is based on the requirements and functions of the business so that the desired results can be attained. A programmer-analyst may also be called on to fix errors in the system, since the person who wrote the code is best placed to resolve its glitches.

Q4. Describe the process management and memory management activities performed by the Operating System.

It is possible to improve CPU utilization and, in turn, the speed of the computer's response to its users. To realize this performance gain, the OS must keep several processes in memory at once, sharing memory among them (Bussell and Taylor, 2006).

Memory management can be realized through the use of memory management algorithms. Following is the description of memory management activities.

Address Binding

The memory management unit (MMU) is a hardware device that maps virtual addresses to physical addresses. Binding the logical address space to the physical address space is a central concept of memory management. The mapping is performed by dynamic relocation, using the relocation register (Silberschatz, 2005).
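The relocation scheme amounts to one addition and one bounds check per access. The base and limit values below are hypothetical, chosen only to illustrate the arithmetic:

```python
def translate(logical, base, limit):
    """Sketch of MMU dynamic relocation: add the relocation (base)
    register to every logical address, after checking it against the
    limit register."""
    if not 0 <= logical < limit:
        raise MemoryError(f"logical address {logical} outside limit {limit}")
    return base + logical

print(translate(346, base=14000, limit=3000))   # physical address 14346
```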

Dynamic Loading

Rather than limiting a process to the size of physical memory, memory can be used more efficiently through dynamic loading: a routine is not loaded until it is called. Routines are kept on disk in a relocatable load format, and only the main program is loaded into memory at the start of execution. The advantage of dynamic loading is that an unused routine is never loaded, which helps when large amounts of code handle infrequent cases, such as error routines. The OS provides library routines that help implement dynamic loading (Silberschatz, 2005).
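Python shows the idea in miniature, since an import inside a function is deferred until the function is first called. This is only an analogy for OS-level dynamic loading, not the mechanism itself:

```python
def report_error(code):
    """Dynamic-loading analogy: the traceback module is imported only
    when an error routine actually runs, not when the program starts."""
    import traceback                     # loaded on first call, then cached
    return f"error {code}: handler loaded {traceback.__name__!r}"

# The module is not loaded until the error routine is invoked.
print(report_error(42))
```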

Fixed Partitioning

In this type of algorithm, memory is divided into fixed-size partitions, with the OS itself residing in the lowest portion of memory. An improved version of this partitioning scheme adds swapping: a time quantum is specified for each process, and when a process's quantum expires, it is swapped out of memory to disk and the next process in the waiting queue is swapped in (Silberschatz, 2005).

Variable Partitioning

The issue with fixed partitioning is choosing the number and sizes of partitions so as to minimize internal and external fragmentation. Variable partitioning instead varies partition sizes dynamically. A linked-list table records which areas of memory are in use and which are free. Initially, all of memory is free and is treated as one large block. When a new process arrives, the OS searches for a free block large enough for it; the rest of the memory remains available for future processes (Silberschatz, 2005).
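The search for a free block is commonly done first-fit. The sketch below keeps the free list as ordinary (start, size) pairs rather than a real linked list, purely for illustration:

```python
def first_fit(free_list, request):
    """Illustrative first-fit allocation over a free list of (start, size)
    holes, as used in variable partitioning. Returns the start address and
    the updated free list, or None if no hole is large enough."""
    for i, (start, size) in enumerate(free_list):
        if size >= request:
            updated = list(free_list)
            leftover = size - request
            if leftover:
                updated[i] = (start + request, leftover)  # shrink the hole
            else:
                del updated[i]                            # hole fully consumed
            return start, updated
    return None

holes = [(0, 100), (300, 500), (900, 200)]
addr, holes = first_fit(holes, 212)
print(addr, holes)   # allocated at 300; that hole shrinks to (512, 288)
```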

Paging

Paging allows a process to be allocated noncontiguous blocks of memory. The OS divides the process into pages, which are small and of fixed size, and divides physical memory into frames of the same size. It then maps pages to memory frames with the help of a page table (Silberschatz, 2005).
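Address translation under paging splits the virtual address into a page number and an offset, then swaps the page number for a frame number. The page size and page-table contents below are assumed for illustration:

```python
PAGE_SIZE = 4096   # assumed page/frame size in bytes

def page_translate(virtual, page_table):
    """Split a virtual address into (page number, offset), look the page
    up in the page table, and rebuild the physical address from the frame."""
    page, offset = divmod(virtual, PAGE_SIZE)
    frame = page_table[page]           # raises KeyError on an unmapped page
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2, 2: 7}       # hypothetical page -> frame mapping
print(page_translate(4100, page_table))   # page 1, offset 4 -> frame 2
```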

Segmentation

In this algorithm, programs are divided into segments of variable size, in contrast to the fixed-size pages of paging. A logical address consists of a segment name (the segments are numbered) and an offset within that segment. The segmentation of a program is performed automatically by the compiler or the assembler (Silberschatz, 2005).
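Translation under segmentation uses a per-segment base and limit instead of a uniform page size. The segment table below is hypothetical:

```python
# Sketch of segmented translation: a logical address is a (segment, offset)
# pair, and each numbered segment has its own base and limit.
SEGMENT_TABLE = {0: (1400, 1000), 1: (6300, 400)}   # segment -> (base, limit)

def seg_translate(segment, offset):
    base, limit = SEGMENT_TABLE[segment]
    if offset >= limit:
        raise MemoryError(f"offset {offset} exceeds segment {segment} limit {limit}")
    return base + offset

print(seg_translate(1, 53))   # 6300 + 53 = 6353
```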

Paged Segmentation

This algorithm makes use of both the segmentation and paging principles. The segments are paged in order to resolve the external fragmentation problem (Silberschatz, 2005).

Process Management Activities

A process is a program in execution, together with the current values of the program counter, the registers, and its variables. To execute, a process requires resources: CPU time, memory, files, and input/output devices. The responsibilities of the OS in relation to process and thread management are:

Process Scheduling

To maximize CPU utilization in a multiprogramming environment, CPU time is shared among all processes frequently enough that users can interact with every program while it runs. Queues are used for this scheduling. As processes enter the system, they are placed in a job queue; processes that are ready and simply waiting to execute are placed in a ready queue, stored as a linked list. The OS maintains many other queues as well: processes waiting for an I/O resource are placed in that device's queue (Apt, 2001).
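The ready queue and time slicing can be sketched together. The quantum and job list below are made up for illustration; a real scheduler also tracks I/O waits, priorities, and arrival times:

```python
from collections import deque

QUANTUM = 4   # assumed time slice

def round_robin(jobs):
    """Toy round-robin scheduler: the ready queue holds (name, remaining)
    pairs, and each process runs for at most one quantum before being
    preempted and re-queued."""
    ready, order = deque(jobs), []
    while ready:
        name, remaining = ready.popleft()
        order.append(name)                  # this process gets the CPU
        if remaining > QUANTUM:
            ready.append((name, remaining - QUANTUM))   # preempted
    return order

print(round_robin([("A", 6), ("B", 3), ("C", 9)]))
```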

Deadlock

In a deadlock, each process in a set is waiting for a resource held by another process in the same set, so none of them can proceed. Deadlock can be prevented in several ways. Related to this is the question of how a process gives up the CPU, which happens in two distinct ways:

Non-preemptive:

In this method, a process voluntarily gives up the CPU.

Pre-emptive:

In this method, a process is forced to give up the CPU, either because a higher-priority process has arrived or because the process has used up its fixed time slice (Mitchell, 2001).

Operations on Processes

Process Creation

Processes in the system can execute concurrently, and they can be created and deleted dynamically; the OS provides mechanisms for both. During the course of its execution, a process can create new processes with a create-process system call. The creating process is called the parent, and the new processes are its children. As further processes are created, a hierarchical structure, or tree, of processes forms. A child process (also known as a subprocess) may obtain its resources directly from the OS, or it may be restricted to a subset of its parent's resources, which prevents overloading the system (Roy and Haridi, 2004).

Process Termination

A process terminates itself when it finishes executing its final statement and asks the OS to delete it via the exit system call. At that point the process can return output data (a status value) to its parent, which the parent collects using the wait system call. All of the resources allocated to the process, including physical and virtual memory, open files, and I/O devices, are then deallocated by the OS. A process can also be terminated by its parent, using the abort system call (Silberschatz, 2005).
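On a POSIX system the whole create/exit/wait cycle can be demonstrated directly (this sketch assumes a Unix-like OS, since fork is not available elsewhere):

```python
import os

# POSIX sketch of the cycle described above: fork creates the child,
# the child terminates with a status, and the parent collects it.
pid = os.fork()                     # parent creates a child process
if pid == 0:
    os._exit(7)                     # child terminates via the exit call
child, status = os.waitpid(pid, 0)  # parent waits and collects the status
print("child", child, "exited with", os.WEXITSTATUS(status))
```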

Cooperating process

Concurrent processes executing in the OS are of two types: independent or cooperating. A process is independent if it cannot affect, and cannot be affected by, the other processes executing in the system; this is the case when it shares no data with them. Cooperating processes are the opposite: they share data, and their concurrent execution requires mechanisms for inter-process communication and for synchronization with one another (Silberschatz, 2005).

Q5. Storage systems increasingly rely on the internet infrastructure as a transport medium. What are the advantages of this approach? What problems are present in terms of security and reliability?

Data storage and security are important information technology issues that organizations must address for the smooth functioning of business processes. Initially, companies relied on hard drives, floppy disks, magnetic disks, CDs, and various other typical storage devices. Over time, with the introduction of fiber channels, storage systems advanced and two types of networks came into common use: Storage Area Networks (SAN) and Network Attached Storage (NAS) (Business Roundtable, 2007). Both models make the network collaborative and support faster, leaner access to large amounts of stored data; they were developed to provide a superior level of data storage, access, and management (Kher & Kim, 2005).

Currently, the latest trend in the market is of cloud computing and cloud storage as most of the companies have started their e-businesses and expanded into different parts of the world. The cloud storage service is mainly offered by third party companies as they take the responsibility of upgrading the software on an ongoing basis and present the customers with highly workable solutions for their storage systems (Null & Lobur, 2012). With the help of this system, all data is stored in one central location and it can be accessed by the concerned authorities easily. A central location makes it convenient for the employees to access all required data and also scan any data required for review.

The main advantage of this approach is that it lets organizations share valuable information and data conveniently, and decision making becomes faster because key people in distant places can retrieve information from anywhere and give their suggestions quickly. Managers sometimes must decide quickly, and in those circumstances storage in one central location proves very useful for businesses. In addition, data storage costs fall sharply, since companies avoid the hassle of upgrading or replacing the storage system; the service provider takes care of those aspects of the complete system (Gopisetty et al., 2008).

The key problems companies encounter with this approach concern the security and reliability of the data. Because organizations depend on third parties for data storage, the service provider may become insolvent, its systems may malfunction, data may be lost, and the data is generally less secure than in a storage data center the company itself controls. When availing itself of a provider's services, a company must also abide by the provider's rules and regulations and trust the professionalism and expertise of its technicians (Marinescu, 2012).

Hence, the storage options available to companies show their dependency on Internet infrastructure. It is important for companies to select a reliable and trustworthy storage service provider that will keep their data secure and protected.

Q6. What is meant by the term digital divide?

The digital divide is the separation of experienced internet users from novices. Internet self-efficacy, a user's belief in their ability to organize a course of internet actions to achieve desired results, is a crucial factor delineating the divide (Eastin and LaRose, 2000).

The digital divide separates upper-middle- and middle-class users from a predominantly minority population of low-income users. It is a social equity issue facing the information society, and one that broadens in scope internationally. The concept has primarily been defined in terms of patterns of ethnicity and class discrimination that underlie unequal use of the facilities and information available on the internet. Beyond socio-economic and racial barriers, novice internet users face psychological issues as well: they feel less comfortable in cyberspace and doubt their internet skills, which compounds the problem. The relationship between self-efficacy and use of personal computers is fairly straightforward (Compaine, 2001).

The idea of the digital divide gained the limelight in the late 90s, around the time the internet gained popularity. Based on the understanding of the internet's relationship to social and economic change, the digital divide approach emphasized getting people connected to the internet however they could, at all costs, so as not to be left behind. At the economic level, the term "internet economy" was coined with great emphasis, reflecting the rage for e-business; at the social level, the main idea lay in the new concept of cyberspace (Norris, 2001).

Perceived gaps among various ethnic, racial, and geographical groups in internet use are subsiding. Two factors primarily drive the embedding of internet technology: the rapidly decreasing cost of its use, and its rapidly increasing ease of use. If these trends continue, the digital divide can be expected to fade away (Gurstein, 2003).

As internet technology becomes more common and is embedded in every application, it will bring many opportunities that help eliminate the divide. The critical divide is really between those who can comprehend and take full advantage of the wealth of information so widely available on the internet, and those who lack the literacy to exploit information resources that are readily and conveniently accessible (Gurstein, 2003).

It may be necessary to eliminate digital divide differences so that internet users across the world are on the same platform, and this can be done by educating users in internet use through various modes of education and skills teaching. Business operations around the world may become smoother and more efficient with the elimination of the digital divide.


