Measuring Information Systems Functional Performance


Zakariya Belkhamza

School of Business and Economics

Universiti Malaysia Sabah

ABSTRACT

With the proliferation of the Internet and World Wide Web applications, people are increasingly interacting with government-to-citizen (G2C) eGovernment systems. It is therefore important to measure the success of G2C eGovernment systems from the citizen's perspective. While general information systems (IS) success models have received much attention from researchers, few studies have been conducted to assess the success of eGovernment systems. The extent to which traditional IS success models can be extended to investigating eGovernment systems success remains unclear. This study provides the first empirical test of an adaptation of DeLone and McLean's IS success model in the context of G2C eGovernment. The model consists of six dimensions: information quality, system quality, service quality, use, user satisfaction, and perceived net benefit. Structural equation modeling techniques are applied to data collected by questionnaire from 119 users of G2C eGovernment systems in Taiwan. Except for the link from system quality to use, the hypothesized relationships between the six success variables are significantly or marginally supported by the data. The findings provide several important implications for eGovernment research and practice. This paper concludes by discussing limitations of the study which should be addressed in future research.

Keywords

Information systems success, information effectiveness, system performance, service performance

INTRODUCTION

Previous studies suggest that information systems projects have lower success rates than other technical projects (Barros et al., 2004; Poon and Wagner, 2001), and the number of unsuccessful information systems projects exceeds the number of successful ones. Success, however, does not depend on a single issue. Complex relations of interdependence exist between information systems and the organization; reducing costs in an organization, for example, cannot be attributed solely to information systems implementation. Information systems success is hard to assess because it represents a vague topic that does not easily lend itself to direct measurement (DeLone and McLean, 1992).

The contribution of information systems-based assets to organizational performance provides a benchmark from which the many processes of the information systems function, including business information system, can be evaluated and refined. Without the benefit of these measures, information systems assets may be undervalued by users and/or top executives resulting in curtailed budget allocations and lower managerial profiles for top information systems executives. In other instances, the absence of reliable performance metrics may cause users and/or top managers to overvalue information systems assets. Users and strategic planners may therefore be unaware of innovations adopted by competing organizations that are enhancing and/or changing their patterns of work and competition. The lack of validated and complete performance criteria in either of the two instances can result in misguided decisions regarding the acquisition, design, and delivery of information systems.

Theoretical Background

The fundamental aim of an information system in an organization is to improve individual decision-making performance and, ultimately, organizational effectiveness. The difficulty of empirically assessing information system effectiveness has led many researchers to adopt surrogate constructs that are more easily measurable (Raymond, 1990). There are two main approaches to evaluating information systems success. The first is behavioral and focuses on systems usage, measured by user behaviors such as offline and online usage, which are not necessarily related (Ein-Dor and Segev, 1978; Srinivasan, 1985). The second approach focuses on user attitude, assessing user satisfaction with various aspects of an information system (Srinivasan and Kaiser, 1987).

This study, however, adopts an approach that appears more appropriate for assessing information systems in the organizational context. This approach is based on the organizational information system manager's perception, as a user, of the performance of all aspects of the information function experienced within the organization (Chang and King, 2005). As mentioned earlier, the operationalization of information systems success for this study follows the guidelines of Cameron and Whetten (1983).

The work of Chang and King (2005) was a response to the problems plaguing organizational effectiveness research noted by Steers (1975). They observed that the information systems function includes all information systems groups and departments within an organization. The information systems function uses resources to produce information systems performance, which in turn influences business process effectiveness and organizational performance.

The managers' perception of information systems activities derives from their use of the information systems products and services provided by the information systems function, which is an antecedent of information systems implementation in the organization. The operationalization of information systems success is based on the three dimensions put forward by Chang and King (2005): systems performance, information effectiveness, and service performance. Their instrument for measuring these three constructs was developed through a Q-sort analysis containing various constructs (Chang and King, 2005). The result of their Q-sort analysis is summarized in Table 4.4.

System Performance: Effect on job; Effect on external constituencies; Effect on internal processes; Effect on knowledge and learning; System features; Ease of use

Information Effectiveness: Intrinsic quality of information; Contextual quality of information; Presentational quality of information; Accessibility of information; Reliability of information; Flexibility of information; Usefulness of information

Service Performance: Responsiveness; Reliability; Service provider quality; Empathy; Training; Flexibility of service; Cost/benefit of service

Table 4.4: Sub-constructs of Information Systems Success (Chang and King, 2005)

System Performance

Systems refer to the set encompassing all information systems applications that the user regularly uses. This construct assesses quality aspects of the system such as reliability, response time, and ease of use, as well as the various impacts that a system has on the user's work (Chang and King, 2005). Chang and King developed the Q-sort of this construct in two ways: they reviewed the measures of the empirical studies listed under the categories of systems quality and individual impact in the DeLone and McLean information systems success model, and the instruments developed by Baroudi and Orlikowski (1988), Doll and Torkzadeh (1988), Davis (1989), Mirani and King (1994), Goodhue and Thompson (1995), and Saarinen (1996). From this Q-sort, six items were developed and hypothesized to assess systems performance. These items are listed in Table 4.5.

Items measuring the construct of system performance:

1. The system used in my organization improves our job performance.
2. The system used in my organization has a positive influence on our organization's external partners/customers.
3. The system used in my organization enhances internal processes in our organization.
4. The system used in my organization enhances the learning process of our staff.
5. The system used in my organization and its interface are easy to use and easy to operate.
6. The system used in my organization has good intrinsic quality.

Table 4.5: Measurement of the Construct of System Performance

Information Effectiveness

Information is the set generated by any of the systems that the user makes use of. This construct assesses the quality of information in terms of the design, operation, use, and value provided by the information, as well as the effects of the information on the user's job (Wand and Wang, 1996). In their Q-sort, Chang and King (2005) used a comprehensive instrument developed by Wang and Strong (1996), which encompasses all measures included in the information quality constructs of the DeLone and McLean information systems success model, in addition to some newly developed items (Chang and King, 2005). For this study, seven items were selected; they are listed in Table 4.6.

Items measuring the construct of information effectiveness:

1. The information generated by the system has good intrinsic quality.
2. The information generated by the system is very reliable.
3. The information generated by the system has good contextual quality.
4. The information generated by the system has good presentational quality.
5. The information generated by the system is very accessible.
6. The information generated by the system is very flexible.
7. The information generated by the system is very useful after usage.

Table 4.6: Measurement of the Construct of Information Effectiveness

Service Performance

This construct assesses the user's experience with the service provided by the information systems function in terms of quality and flexibility. The service provided by the information systems function includes activities related to system development (Fitzgerald et al., 1993; Chang and King, 2005). The Q-sort of Chang and King (2005) includes the IS-SERVQUAL instrument developed by Pitt et al. (1995), in addition to other items developed by Fitzgerald et al. (1993). The Q-sort also covers newer aspects of information systems function performance such as ERP, knowledge management, and electronic business. For this study, four items were selected based on the recommendation of Chang and King (2005). They are:

Items measuring the construct of service performance:

1. The service provided by the information system is very responsive.
2. The service provided by the information system has good intrinsic quality.
3. The service provided by the information system has good interpersonal quality.
4. The service provided by the information system is very flexible.

Table 4.7: Measurement of the Construct of Service Performance

Population and Sampling Method

Multimedia Super Corridor status companies were selected for this purpose. The Malaysian government embarked on a bold move by developing the Multimedia Super Corridor (MSC), launched on 27 June 1996 (MDC, 2008). Technological parks such as the Singapore Science Park, the Kanagawa Science Park in Japan, and Silicon Valley in California have been developed in line with the respective governments' plans to spur knowledge-intensive activities and boost the technological advancement of the nation.

The MSC project is part of Malaysia's long-term plan to become a fully developed nation and knowledge-rich society by the year 2020 (MDC, 2008). It is also meant to advance Malaysia's development through the creation of an ideal information technology environment for world-class companies to use as a regional hub (MDC, 2008). The MSC comprises several administrative, industrial and technological development clusters, including Putrajaya, the newly built seat of the federal government; Cyberjaya, an intelligent city which houses IT industries as well as research centres and the Multimedia University; and Technology Park Malaysia, a technology park located in the centre of the MSC providing engineering and IT facilities to entrepreneurs, investors and industries (MDC, 2008). By providing the infrastructure and the environment needed to encourage innovation and creativity, Malaysia is paving the way to become a platform for growth and advancement as well as a leader in IT. The MSC was developed specifically to explore the frontiers of information and multimedia technologies, revealing their full potential through the creation and implementation of cyber laws, cutting-edge technologies and excellent infrastructure.

According to MDC (2008), there are over 2173 approved MSC-status companies. Out of these, the Multimedia Development Corporation classifies 87 of them as being world-class companies. Companies seeking MSC status and eligibility for incentives will need to fulfill the following criteria:

They must be a provider of, or a heavy user of, multimedia products or services;

They must employ a sustainable number of knowledge workers;

They must provide technology transfer and/or knowledge to Malaysia, otherwise contribute to the development of the MSC, or support K-economy initiatives;

They should not be engaged in non-qualifying activities such as manufacturing, trading and consultancy.

Successful companies must observe the conditions attached to MSC-status recognition. These companies enjoy a set of incentives and benefits from the Malaysian government, backed by the ten-point Bill of Guarantees. MSC-status companies contribute about 19.22% of the total IT workforce in Malaysia, with more than 89% of the staff employed by MSC Malaysia status companies categorized as knowledge workers holding high-value jobs, and 57% of employees of MSC Malaysia status companies holding at least a first degree.

As can be seen from the eligibility criteria, companies that comply with them operate in an environment conducive to sound information systems implementation in their business activities. Together with a knowledgeable workforce, a good combination of IT applications and systems in their business activities and organizational climate is maintained and upheld, making these organizations an ideal population for testing the proposed model.

For this population, a complex probability sampling technique was adopted. Proportional stratified random sampling was the most appropriate technique for this frame, as the MSC status companies are classified according to their business activities and functions. Stratified random sampling is a technique in which the population is divided into subgroups or strata such that each element of the population lies in exactly one stratum; samples are then taken randomly within each stratum (Davis and Yen, 1999). The proportional technique for random selection from the strata consists of selecting samples from each stratum in proportion to that stratum's share of the whole population. For instance, the technological cluster of Application Software consists of 963 of the 2,173 companies; following the proportional technique, the sample from this stratum comprises 47% of these 963 companies (see Table 4.9).

No.  Technological cluster                              Number of companies   Sample Distribution (Proportion)   Sample Selected
1    Application Software (AS)                          963                   47%                                452
2    Mobility, Embedded Software and Hardware (MeSH)    440                   21%                                92
3    Shared Services & Outsourcing (SSO)                164                   8%                                 13
4    Creative Multimedia Companies (CMC)                235                   11%                                25
5    Internet-based Business (IBB)                      282                   13%                                36
     Total                                              2084                                                     618

Table 4.9: Distribution of the Drawn Sample across Technological Clusters

With the exception of companies under the sixth technology cluster, which were excluded from the study because of their non-business nature, the five technology clusters represent the five strata of the sampling, giving a final population of 2084 companies. Following the stratified proportional technique, the final sample drawn was 618 organizations. Table 4.9 shows the distribution of the drawn sample in each stratum.
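To make the within-stratum selection concrete, the sketch below draws the per-stratum samples listed in Table 4.9 using simple random selection inside each stratum. It is illustrative only and assumes the company lists for each cluster are available as plain Python lists; the company identifiers, function name, and seed are hypothetical placeholders, not the actual procedure or tooling used in the study.

import random

# Stratum population sizes and sample sizes as reported in Table 4.9.
strata = {
    "Application Software (AS)":                       (963, 452),
    "Mobility, Embedded Software and Hardware (MeSH)": (440, 92),
    "Shared Services & Outsourcing (SSO)":             (164, 13),
    "Creative Multimedia Companies (CMC)":             (235, 25),
    "Internet-based Business (IBB)":                   (282, 36),
}

def draw_stratified_sample(strata_info, seed=1):
    """Randomly select the required number of companies within each stratum."""
    rng = random.Random(seed)
    sample = {}
    for cluster, (population_size, sample_size) in strata_info.items():
        # Placeholder identifiers standing in for the MSC company directory.
        companies = [f"{cluster} company {i}" for i in range(1, population_size + 1)]
        sample[cluster] = rng.sample(companies, sample_size)
    return sample

sample = draw_stratified_sample(strata)
print(sum(len(v) for v in sample.values()))  # 618 organizations in total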

Questionnaire Design

The questionnaire used for data collection contains items measuring the information systems success constructs, namely system performance, information effectiveness and service performance. These constructs have been operationalized and reported in section 4.3.3. Table 4.12 summarizes the information systems success constructs and the number of items.

Information systems success dimensions and number of items (sources: Wand and Wang, 1996; DeLone and McLean, 2003; Chang and King, 2005; Pitt et al., 1995):

1. System performance: 6 items
2. Information effectiveness: 7 items
3. Service performance: 4 items

Table 4.12: Information Systems Success Constructs and Number of Items

For the information systems success constructs, it is important to highlight that caution should be taken when choosing the measurement scale (Church and Waclawski, 1989). Chang and King (2005) chose a measurement scale ranging from 1 = hardly at all to 5 = to a great extent, with 0 = not applicable. This scale seems misleading because it appears to be bipolar rather than unipolar: "hardly at all" might be taken as the negative pole of a bipolar scale, which does not necessarily assess the success of the information system. Measuring information systems success from a managerial perspective requires respondents to rate the system's behavior, not their own behavior; this system behavior can be attributed to respondents as their attitude towards systems performance. Assessing the success of an information system requires a scale that represents a single positive attribute in the respondent's mind, in this case success. A bipolar scale prompts a respondent to balance two opposite attributes, success and failure, which is not only beyond what the items are meant to measure but would also produce biased data that affect construct validity. It is therefore suggested to use a unipolar scale for success measurement, which prompts the respondent to think of the presence or absence of systems success. The items were measured using the scale: 1 = to no extent, 2 = to a little extent, 3 = to some extent, 4 = to a great extent, 5 = to a very great extent.

The last section of the questionnaire consists of demographic questions about the respondent.

Data Collection Methods

Although surveys are a research method widely used in information systems research (Galliers, 1992), a challenging aspect of this study was the use of a web-based survey. It is therefore very important to follow accepted procedures in order to ensure the quality of the data collected and the validity and reliability of the instruments. To recall, the instruments presented earlier were developed based on: 1) the conceptual foundation discussed in the literature review, 2) the insights obtained through focus groups with academics and experts, and 3) procedures to assure the validity and reliability of the data and instruments. Each instrument was reviewed in terms of completion time, to identify areas of confusion, and to assure valid responses to the survey.

The data used for analyzing the a priori model were collected using a web-based survey application called "Winsurvey". This software application is a powerful tool for designing the questionnaire, publishing it on a website, and distributing it to the appropriate e-mail addresses of the sample. Although this software is not commonly used in academia, its capabilities and the control it provides make it an appropriate tool for data collection.

The electronic version of the questionnaire was set up in the software as four parts spread over four HTML web pages. The first part contains the organizational context dimension items, the second page contains the organizational ambidexterity items, and the third page contains the information systems success items. The last page consists of demographic questions. Each page has a "next" button which leads to the next page. A general outline of the software pages is shown in the Appendix.

Before sending the questionnaire, a database containing the names, e-mail addresses, and postal addresses of the MSC-status companies was created. Care was taken to ensure that all e-mail addresses entered in the database belonged to IT managers, CIOs, or executive-level IT personnel listed for the MSC-status companies.

The major advantage of this software is that it associates a unique ID number with each case (i.e. organization), which represents the unique identity of that organization both before the questionnaire is sent and after the response is received. The second advantage is that the software prevents a respondent from answering the questionnaire more than once. These two functions allow full control of the data and prevent duplicate or biased responses.

An invitation letter briefing the respondent on the study was attached to the questionnaire. The letter also assures potential respondents of the benefits of the survey for academia as well as for industry, and of the anonymity and confidentiality of their responses. A sample of the letter is presented in the Appendix.

After the questionnaire is sent to the companies, the software records the time and date of sending. The companies receive an e-mail containing the invitation letter and the link to the website where the questionnaire is hosted. Each link holds the unique ID number of that particular organization to keep track of the respondent. When the respondent reaches the final page of the questionnaire and clicks the "send" button, that respondent's data are uploaded to a MySQL database on the server. The data are then retrieved and stored on the local machine as a MySQL database through the software interface. Each respondent's data carry the same ID number, which makes it possible to track every respondent.
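The ID-tracking behaviour described above can be sketched as follows. This is a minimal illustration only, not Winsurvey's actual implementation: it uses Python with a local SQLite database in place of the MySQL back end, and the table, column, and function names are hypothetical.

import sqlite3
import uuid

conn = sqlite3.connect("survey_tracking.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS invitations (
    org_id  TEXT PRIMARY KEY,   -- unique ID embedded in each e-mailed link
    company TEXT NOT NULL,
    sent_at TEXT
);
CREATE TABLE IF NOT EXISTS responses (
    org_id      TEXT PRIMARY KEY REFERENCES invitations(org_id),
    answers     TEXT,           -- serialized questionnaire answers
    received_at TEXT DEFAULT CURRENT_TIMESTAMP
);
""")

def register_invitation(company_name):
    """Create the unique ID that identifies one organization in the survey link."""
    org_id = uuid.uuid4().hex
    conn.execute(
        "INSERT INTO invitations (org_id, company, sent_at) VALUES (?, ?, datetime('now'))",
        (org_id, company_name),
    )
    conn.commit()
    return org_id

def record_response(org_id, answers):
    """Store a response once; the primary key rejects a second reply from the same ID."""
    try:
        conn.execute("INSERT INTO responses (org_id, answers) VALUES (?, ?)", (org_id, answers))
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        return False  # duplicate submission is ignored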

RESULTS

The Measurement Model

As mentioned above, the test of the research model includes two stages: the estimation of the measurement model and the estimation of the structural model. In the estimation of the measurement model, the psychometric properties of the measures are evaluated in terms of reliability and validity. The estimation of the structural model involves the assessment of the path analysis of the theorized model.
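As an illustration of the first stage, the measurement part of the model, relating each first-order construct to its retained items (see Table 6.9 below), could be written in lavaan-style syntax, for example with the semopy package in Python. The choice of package and the data file name are assumptions made for illustration only; the text does not specify which SEM software was actually used.

import pandas as pd
import semopy

# Reflective measurement model for the three first-order constructs,
# using the retained item names reported in Table 6.9.
measurement_model = """
SystemPerformance        =~ SYSPR1 + SYSPR2 + SYSPR5
InformationEffectiveness =~ INFOE1 + INFOE2 + INFOE3 + INFOE4
ServicePerformance       =~ SERVP1 + SERVP2 + SERVP3 + SERVP4
"""

# 'responses.csv' is a hypothetical file with one row per organization
# and one column per questionnaire item.
data = pd.read_csv("responses.csv")

model = semopy.Model(measurement_model)
model.fit(data)
print(model.inspect())  # factor loadings and error variances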

Three tests were computed to assess reliability: Cronbach's alpha, the composite reliability coefficient, and average variance extracted (AVE). To recall, Nunnally (1978) suggested that coefficient alpha should be used to assess the quality of instruments because it is loaded with meaning: the square root of coefficient alpha is the estimated correlation of the k-item test with errorless true scores. Hair et al. (2010) suggested that both Cronbach's alpha and the composite reliability coefficient should be equal to or greater than 0.70 to represent good reliability, although in exploratory studies a value of .60 may also be accepted (Hair et al., 2010). As for AVE, a level greater than .50 is considered acceptable (Chin, 1998; Chang and King, 2005).
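The three reliability measures follow standard formulas, shown in the sketch below. This is illustrative only and not the software used to produce the reported figures: Cronbach's alpha is computed from the raw item scores of one construct, while composite reliability and AVE are computed from the standardized factor loadings.

import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = respondents, columns = items of one construct."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1)
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def composite_reliability(loadings):
    """Composite reliability from standardized factor loadings."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized factor loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

# Example usage with the standardized loadings of one construct:
servp_loadings = [0.93, 0.94, 0.75, 0.70]
cr = composite_reliability(servp_loadings)
ave = average_variance_extracted(servp_loadings)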

Reliability of Information Systems Success Constructs

Table 6.6 below shows that service performance has a Cronbach's alpha of .75, information effectiveness .73, and system performance .86. Composite reliability coefficients are also above .80 for the three information systems success constructs, and the AVE measures for the three constructs are above the acceptance level. These results indicate that all constructs measuring information systems success have good reliability.

Construct                   Internal Consistency (Cronbach's alpha)   Composite reliability coefficient   AVE
Service performance         .75                                       .87                                 .67
Information effectiveness   .73                                       .83                                 .56
System performance          .86                                       .90                                 .71

Table 6.6: Reliability Test for Information Systems Success Constructs

Construct Validity

Churchill (1979) argues that construct validity is most directly related to the question of what the instrument is in fact measuring. To assess construct validity, the item loadings for each factor were examined.

The factor loadings of the information systems success constructs are shown in Table 6.9. There was a problem with one item (SYSPR3) in the system performance construct, which loaded very weakly (.114) on its construct. Although the system performance construct is therefore represented by only three items, their factor loadings are above .90, so the construct is well represented. The factor loadings for the information effectiveness items range from .73 to .76. All the service performance items have high, significant factor loadings ranging from .70 to .94. Therefore, all items of information systems success load significantly on their respective constructs, indicating good convergent validity.

Construct                   Item      Factor Loading   AVE
System performance          SYSPR1    .93              .67
                            SYSPR2    .94
                            SYSPR5    .96
Information Effectiveness   INFOE1    .74              .56
                            INFOE2    .76
                            INFOE3    .73
                            INFOE4    .74
Service performance         SERVP1    .93              .71
                            SERVP2    .94
                            SERVP3    .75
                            SERVP4    .70

Table 6.9: Factor Loadings for Information System Success Items

The Structural Model

The structural model, the second important stage in structural equation modeling, allows the illustration of hypothesised relationships among the latent variables. As a result, the structural model will indicate the extent to which an a priori hypothesized relationship is supported by the data.

In developing the structural model, it is important in structural equation modelling to identify whether each of the constructs in the model is reflective or formative. Jarvis et al. (2003) highlight that researchers often mis-specify formative constructs as reflective, which introduces errors into the structural model. They note that the decision to model a construct as formative or reflective should be based on: 1) the direction of causality from construct to indicators; 2) the interchangeability of the indicators; 3) co-variation among the indicators; and 4) the nomological net of the construct indicators. Constructs are modeled as formative if the direction of causality is from indicators to construct, the indicators need not be interchangeable, the indicators need not co-vary, and the nomological net of the indicators can differ. They are modeled as reflective if the opposite conditions apply. According to Kock (2010), a reflective latent variable is one in which all the indicators are expected to be highly correlated with the latent variable score. A formative latent variable is one in which the indicators are expected to measure certain attributes of the latent variable, but are not expected to be highly correlated with the latent variable score, because they are not expected to be correlated with each other.

Based on these guidelines, each of the three second-order constructs of information systems success (i.e. systems performance, information effectiveness, and service performance) is modeled as formative. In the case of information systems success, higher system performance need not necessarily be accompanied by higher levels of information effectiveness or service performance, meaning that there is no theoretical rationale for them to necessarily occur together.
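A quick empirical check of the co-variation criterion above is to inspect the correlations among the three construct scores: reflective indicators are expected to correlate highly, while formative indicators need not. A minimal sketch, assuming the construct scores are available as columns of a numeric array (the variable names are hypothetical):

import numpy as np

def indicator_correlations(scores):
    """scores: 2-D array, rows = organizations, columns = indicator scores
    (here: system performance, information effectiveness, service performance)."""
    return np.corrcoef(np.asarray(scores, dtype=float), rowvar=False)

# corr = indicator_correlations(construct_scores)
# Uniformly high off-diagonal correlations would support a reflective
# specification; low or uneven correlations are consistent with the
# formative specification adopted here.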



