Cloud information accountability. An overview


Scientific Study, 2017

69 pages, grade: 1.5


Excerpt


Table of contents

Table of figures

Table of tables

List of abbreviations

Cloud information accountability: an overview

Abstract

1. Introduction

2. System analysis
2.1 Scope
2.2 Preliminary investigation

3. Materials and Methods
3.1 Existing system
3.2 Proposed system
3.3 Feasibility of the study
3.4 Requirement and specifications
3.5 About Microsoft Visual Studio 2015
3.6 Selection of operating system
3.7 System design
3.8 Input design
3.9 Use and design of modules
3.10 Output design
3.11 Database design
3.12 Software testing
3.13 Implementation
3.14 Limitations
3.15 Future scope

4. Conclusions

Acknowledgements

References

ACKNOWLEDGEMENTS

Firstly, we thank God Almighty, whose blessings were always with us and helped us to complete this research work successfully.

We are extremely grateful to Prakash Joseph (Head of the Department, Computer Science) for his valuable suggestions, support and encouragement. We wish to thank our beloved Manager Rev. Fr. Dr. George Njarakunnel, Respected Principal Dr. Joseph V. J, Bursar Shaji Augustine, Vice Principal Fr. Joseph Allencheril, and the Management for providing all the necessary facilities for carrying out the study.

We are lovingly and gratefully indebted to our teachers, parents, siblings and friends, who were always there to help us with this project.

Prem Jose Vazhacharickal*, Sunil K Joseph and Abhiram Vijayan

*Address for correspondence

Assistant Professor

Department of Biotechnology

Mar Augusthinose College

Ramapuram-686576

Kerala, India

premjosev@gmail.com

Table of figures

Figure 1. Context level diagram; Level_0 DFD

Figure 2. Top Level DFD for user

Figure 3. Level_2 DFD for user process 4; file upload.

Figure 4. Level_2 DFD for user process 5; file upload.

Figure 5. Level_2 DFD for user process 6; verified files.

Figure 6. Top level DFD for TPA.

Figure 7. Top level DFD for Admin.

Figure 8. Home page of the developed owner registration

Figure 9. Details of owner sign in page.

Figure 10. Details of owner home page.

Figure 11. Details of File upload option in the web page.

Figure 12. Details of File block option in the web page.

Figure 13. Details of TPA sign in page.

Figure 14. Details of TPA home page.

Figure 15. Details of TPA sign in for cloud1.

Figure 16. Details of TPA cloud 1 home.

Figure 17. Details of the TPA cloud to be verified.

Figure 18. Details of the TPA file block

Figure 19. Details of the TPA file decrypt

Figure 20. Details of the TPA sign in for cloud 2.

Figure 21. Details of the TPA cloud 2 files to be verified.

Figure 22. Details of the TPA file block.

Figure 23. Details of the TPA file block decrypt.

Figure 24. Details of the TPA sign in for cloud 3.

Figure 25. Details of the TPA cloud 3 files to be verified.

Figure 26. Details of the TPA file block decrypt.

Figure 27. Details of the owner verified files.

Figure 28. Details of the owner verified file download.

Table of tables

Table 1. Table details (Filearchieve) and creation parameters.

Table 2. Table details (Filemetadata) and creation parameters.

Table 3. Table details (Ownerregistration) and creation parameters.

Table 4. Table details (Fileindex) and creation parameters.

Table 5. Table details (FileVerify_Temp) and creation parameters.

Table 6. Table details (Login) and creation parameters.

Table 7. Table details (TPALogin) and creation parameters.

Table 8. Table details (KeyGenerate) and creation parameters.

List of abbreviations

illustration not visible in this excerpt

Cloud information accountability: an overview

Prem Jose Vazhacharickal1*, Sunil K Joseph2, Abhiram Vijayan2 and Geethu Thomas3

* premjosev@gmail.com

1Department of Biotechnology, Mar Augusthinose College, Ramapuram, Kerala, India-686576

2Department of Computer Science, Mar Augusthinose College, Ramapuram, Kerala, India-686576

3Higher Secondary Division, Sacred Heart English Medium Higher Secondary, Moolamattom, Kerala, India-685589

Abstract

Provable data possession (PDP) is a technique for ensuring the integrity of data in storage outsourcing. In this scheme, we address the construction of an efficient PDP scheme for distributed cloud storage to support scalability of service and data migration, in which we consider the existence of multiple cloud service providers that cooperatively store and maintain the clients' data. We present a cooperative PDP (CPDP) scheme based on homomorphic verifiable responses and a hash index hierarchy. We prove the security of our scheme based on a multi-prover zero-knowledge proof system, which satisfies completeness, knowledge soundness, and zero-knowledge properties. In addition, we articulate performance optimization mechanisms for our scheme, and in particular present an efficient method for selecting optimal parameter values to minimize the computation costs of clients and storage service providers. Our experiments show that our solution introduces lower computation and communication overheads than non-cooperative approaches. Using MR-PDP to store t replicas is computationally much more efficient than using a single-replica PDP scheme to store t separate, unrelated files (e.g., by encrypting each file separately prior to storing it). Another advantage of MR-PDP is that it can generate further replicas on demand, at little expense, when some of the existing replicas fail. Replicas are generated on demand at the user's request, based on the security level the user selects at the time of file upload. The user can choose among three options, Low, Medium and High, at upload time. The uploaded file is divided into N blocks of different sizes to achieve storage efficiency and to improve security, where N is the number of clouds in use. Low means the file is divided into N blocks (here 3) and each block is stored in a different location within a single cloud. Medium means the file is divided into N blocks and each block is stored in a different cloud, which improves the security of the data but reduces its availability. High means the file is divided into N blocks and all N blocks are stored in each of the N clouds; that is, we keep replicas of the file in N different clouds. The system maintains a download count to dynamically create replicas in accordance with user demand.

Keywords: Data flow diagram, Databases, Cloud computing, Security, C#.

1. Introduction

Provable data possession (PDP) is a technique for ensuring the integrity of data in storage outsourcing. In this scheme, we address the construction of an efficient PDP scheme for distributed cloud storage to support scalability of service and data migration, in which we consider the existence of multiple cloud service providers that cooperatively store and maintain the clients' data (Zhu et al., 2012; Zhu et al., 2011; Wei et al., 2014; Juels and Oprea, 2013; Barsoum and Hasan, 2010). We present a cooperative PDP (CPDP) scheme based on homomorphic verifiable responses and a hash index hierarchy. We prove the security of our scheme based on a multi-prover zero-knowledge proof system, which satisfies completeness, knowledge soundness, and zero-knowledge properties. In addition, we articulate performance optimization mechanisms for our scheme, and in particular present an efficient method for selecting optimal parameter values to minimize the computation costs of clients and storage service providers. Our experiments show that our solution introduces lower computation and communication overheads than non-cooperative approaches.

Many storage systems rely on replication to increase the availability and durability of data on untrusted storage systems. At present, such storage systems provide no strong evidence that multiple copies of the data are actually stored. Storage servers can collude to make it look like they are storing many copies of the data, whereas in reality they only store a single copy (Curtmola et al., 2008; Bessani et al., 2013; Barsoum and Hasan, 2010; Shraer et al., 2010). We address this shortcoming through multiple-replica provable data possession (MR-PDP): a provably secure scheme that allows a client storing t replicas of a file in a storage system to verify, through a challenge-response protocol, that each unique replica can be produced at the time of the challenge and that the storage system uses t times the storage required to store a single replica. MR-PDP extends previous work on data possession proofs for a single copy of a file in a client/server storage system.

Using MR-PDP to store t replicas is computationally much more efficient than using a single-replica PDP scheme to store t separate, unrelated files (e.g., by encrypting each file separately prior to storing it). Another advantage of MR-PDP is that it can generate further replicas on demand, at little expense, when some of the existing replicas fail (Curtmola et al., 2008; Joseph, 2014; Singh and Padmavathi, 2015). Replicas are generated on demand at the user's request, based on the security level the user selects at the time of file upload. The user can choose among three options, Low, Medium and High, at upload time. The uploaded file is divided into N blocks of different sizes to achieve storage efficiency and to improve security, where N is the number of clouds in use. Low means the file is divided into N blocks (here 3) and each block is stored in a different location within a single cloud. Medium means the file is divided into N blocks and each block is stored in a different cloud, which improves the security of the data but reduces its availability. High means the file is divided into N blocks and all N blocks are stored in each of the N clouds; that is, we keep replicas of the file in N different clouds. The system maintains a download count to dynamically create replicas in accordance with user demand.
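To make the placement policy concrete, here is a minimal C# sketch (C# being the project's implementation language) that splits a file into N blocks and computes where each block or replica would be stored under each security level. The type and member names are illustrative assumptions, not the system's actual code.

using System;
using System.Collections.Generic;

// Illustrative sketch only: split a file into N blocks and compute the
// (cloud, location) targets for each block under the Low/Medium/High policy.
public enum SecurityLevel { Low, Medium, High }

public sealed class Placement
{
    public int Cloud;     // which cloud stores the copy
    public int Location;  // which location inside that cloud
    public int Block;     // which block of the file is stored there
    public Placement(int cloud, int location, int block)
    {
        Cloud = cloud; Location = location; Block = block;
    }
}

public static class BlockPlacement
{
    // Split the file into n blocks; the last block absorbs the remainder,
    // so the block sizes differ as described above.
    public static List<byte[]> SplitIntoBlocks(byte[] file, int n)
    {
        var blocks = new List<byte[]>();
        int offset = 0;
        for (int i = 0; i < n; i++)
        {
            int size = (i == n - 1) ? file.Length - offset : file.Length / n;
            var block = new byte[size];
            Array.Copy(file, offset, block, 0, size);
            blocks.Add(block);
            offset += size;
        }
        return blocks;
    }

    // Low: all N blocks in N locations of one cloud. Medium: one block per
    // cloud. High: every block replicated to every one of the N clouds.
    public static List<Placement> Place(int n, SecurityLevel level)
    {
        var targets = new List<Placement>();
        for (int b = 0; b < n; b++)
        {
            if (level == SecurityLevel.Low)
                targets.Add(new Placement(0, b, b));
            else if (level == SecurityLevel.Medium)
                targets.Add(new Placement(b, 0, b));
            else // High
                for (int c = 0; c < n; c++)
                    targets.Add(new Placement(c, b, b));
        }
        return targets;
    }
}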

The system has three kinds of users: the User, who has the rights to upload, download and delete files; the TPA (Third Party Auditor), who verifies the files uploaded by registered users (a user can download a file only after this verification); and the Admin, who owns the system, has full access rights, can create or delete TPAs, and can view the uploaded files and the details of each upload. A single cloud can have several TPAs, and the workload is divided among them by using a random function to select the corresponding files from the cloud. The creation and deletion of TPAs is based on their workload and efficiency, which are monitored by the administrator. The data uploaded by the user is temporarily stored in encrypted form using a homomorphic encryption algorithm. Any encryption algorithm can be used with this application, but it is better to choose one supporting zero-knowledge proofs. An encryption key is automatically supplied to the user at the time of file upload. The data is stored in the cloud only after it has been verified by the TPA. The data is actually stored in an encrypted form called metadata, which provides an additional security measure for the cloud data. The user receives the original file when he or she downloads it from cloud storage, which ensures the integrity of the data; the user is unaware of the background processes. This system reduces the workload of the admin by creating TPAs, and any number of TPAs can be assigned per cloud, depending on the number of clouds in use.
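The random division of work among a cloud's TPAs and the download counter could be sketched as follows; the identifiers and the replica threshold are assumptions chosen for illustration, not values taken from the described system.

using System;
using System.Collections.Generic;

// Illustrative sketch: assign uploaded files to one of a cloud's TPAs with a
// random function, and count downloads so replicas can be created on demand.
public class TpaDispatcher
{
    private static readonly Random Rng = new Random();
    private readonly List<string> _tpas;   // TPA ids registered for one cloud
    private readonly Dictionary<string, int> _downloads =
        new Dictionary<string, int>();

    public TpaDispatcher(List<string> tpas) { _tpas = tpas; }

    // Random selection spreads the verification workload across TPAs.
    public string AssignVerifier(string fileId)
    {
        return _tpas[Rng.Next(_tpas.Count)];
    }

    // Called on every download; returns true when demand justifies creating
    // another replica (the threshold is an assumed example value).
    public bool RecordDownloadAndCheckReplica(string fileId, int threshold = 100)
    {
        int count;
        _downloads.TryGetValue(fileId, out count);
        count++;
        _downloads[fileId] = count;
        return count % threshold == 0;
    }
}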

2. System analysis

System analysis is the application of the systems approach to problem solving using computers. The ingredients are system elements, processes and technology. This means that to do systems work, one needs to understand the system concept and how organizations operate as a system, and then design appropriate computer-based systems that will meet an organization's requirements. It is, in effect, a customized approach to the use of computers for problem solving.

2.1 Scope

The scope of computerization in any field is increasing; speed, accuracy and storage capacity are the factors supporting this growth. In this project we implement an environment that enables different users who have a valid username and password to access these services.

2.2 Preliminary investigation

The initial investigation is the activity that determines whether the user's request is valid and feasible. The first step in the initial investigation is problem definition, which includes identifying the problem to be solved, the task to be accomplished, and the system goals to be achieved.

3. Materials and Methods

3.1 Existing system

There exist various tools and technologies for multi-cloud, such as Platform VM Orchestrator and oVirt. These tools help cloud providers construct a distributed cloud storage platform for managing clients' data. However, if such an important platform is vulnerable to security attacks, it will bring irretrievable losses to the clients. For example, confidential data in an enterprise may be illegally accessed through a remote interface provided by a multi-cloud, or relevant data and archives may be lost or tampered with when they are stored in an uncertain storage pool outside the enterprise. Therefore, it is indispensable for cloud service providers to provide security techniques for managing their storage services. Moreover, another limitation of the existing system is that it is not suitable for multi-cloud storage services.

To check the availability and integrity of outsourced data in cloud storage, researchers have proposed two basic approaches, called Provable Data Possession and Proofs of Retrievability. Ateniese et al. (2007) first proposed the PDP model for ensuring possession of files on untrusted storage and provided an RSA (Ron Rivest, Adi Shamir, and Leonard Adleman)-based scheme for the static case that achieves constant (O(1)) communication cost. They also proposed a publicly verifiable version, which allows anyone, not just the owner, to challenge the server for data possession. They further proposed a lightweight PDP scheme based on a cryptographic hash function and symmetric-key encryption, but the servers can deceive the owners by replaying previous metadata or responses due to the lack of randomness in the challenges. The number of updates and challenges is limited and fixed in advance, and users cannot perform block insertions anywhere.
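The flavor of such a lightweight hash-based scheme, and its stated weakness, can be illustrated with a toy C# sketch: the owner precomputes a fixed set of HMAC challenge tokens before outsourcing, so only that fixed number of challenges can ever be posed. This is a simplified stand-in for teaching purposes, not the cited authors' actual construction.

using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;

// Toy illustration of the lightweight hash-based PDP idea described above:
// before outsourcing, the owner precomputes tokens t_i = HMAC(k_i, file);
// later the server must reproduce each token on demand. The weakness shows
// directly: only a fixed number of challenges exist, and they cannot be
// refreshed without the original file.
public static class LightweightPdp
{
    // Owner side, before upload: one token per future challenge key.
    public static List<byte[]> PrecomputeTokens(byte[] file, byte[][] keys)
    {
        var tokens = new List<byte[]>();
        foreach (var k in keys)
        {
            using (var hmac = new HMACSHA256(k))
            {
                tokens.Add(hmac.ComputeHash(file));
            }
        }
        return tokens;
    }

    // Server side: answer challenge i by hashing the stored file with key k_i.
    public static byte[] Respond(byte[] storedFile, byte[] key)
    {
        using (var hmac = new HMACSHA256(key))
        {
            return hmac.ComputeHash(storedFile);
        }
    }

    // Owner side: compare the response with the precomputed token.
    public static bool Verify(byte[] expectedToken, byte[] response)
    {
        return expectedToken.SequenceEqual(response);
    }
}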

Disadvantages of the existing system:

- Cloud service providers must themselves provide security techniques for managing their storage services.
- It is not suitable for multi-cloud storage services.

3.2 Proposed system

We present a cooperative PDP (CPDP) scheme based on homomorphic verifiable responses and a hash index hierarchy. We prove the security of our scheme based on a multi-prover zero-knowledge proof system, which satisfies completeness, knowledge soundness, and zero-knowledge properties. In addition, we articulate performance optimization mechanisms for our scheme, and in particular present an efficient method for selecting optimal parameter values to minimize the computation costs of clients and storage service providers.

The advantages of proposed system are:

- An integrated, highly secure and high-capacity database to store the information
- Fast and easy access to data
- The new system is more user-friendly and flexible
- Mailing facility is available
- Security of the system is ensured and data is protected from unauthorized users
- Support for multiple users is also ensured
- Avoid duplication of records

3.3 Feasibility of the study

The objective of a feasibility study is to test the technical, social and economic feasibility of developing a computer system. This is done by investigating the existing system and generating ideas about a new system. The computer system must be evaluated from a technical viewpoint first and, if technically feasible, its impact on the organization and the staff must be assessed. If a compatible social and technical system can be devised, it must then be tested for economic feasibility. The three important tests for feasibility are described below:

- Operational feasibility
- Technical feasibility
- Economic feasibility

3.3.1 Operational feasibility

Proposed projects are beneficial only if they can be turned into information systems that will meet the operating requirements of the organization. This test of feasibility asks whether the system will work when it is developed and installed.

Some of the important questions that are useful for testing the operational feasibility of a project are given below:

- Is there sufficient support for the project from management? From users? If the present system is well liked and used to the extent that people see no reason for change, there may be resistance.
- Are current methods acceptable to the users? If they are not, users may welcome a change that will bring about a more operational and useful system.
- Have the users been involved in the planning and development of the project? If so, the chances of resistance can be reduced.
- Issues that appear to be quite minor at an early stage can grow into major problems after implementation.

3.3.2 Technical feasibility

The assessment of technical feasibility must be based on the outline of the system requirements in terms of inputs, outputs, files, programs, procedures and staff. This can be quantified in terms of volumes of data, trends and frequency of updating. Having identified an outline system, the investigator must go on to suggest the type of equipment required, methods of developing the system and methods of running the system.

With regard to the processing facilities, the feasibility study will need to consider the possibility of using a bureau or, if in-house equipment is available, the nature of the hardware to be used for data collection, storage, output and processing. There are a number of technical issues which are generally raised during the feasibility stage of the investigation. They are as follows:

- Does the necessary technology exist to do what is suggested?
- Does the proposed equipment have the capacity to hold the data required to use the new system?
- Can the system be upgraded if developed?
- Are there technical guarantees of accuracy, reliability, ease of access and security?

3.3.3 Economic feasibility

A system that can be developed technically and that will be used if installed must still be profitable for the organization. Financial benefits must equal or exceed the costs. Justification for any outlay is that it will increase profit and reduce expenditure.

3.4 Requirement and specifications

The software requirement specification is produced at the culmination of the analysis task. The function and performance allocated to the software as part of the system engineering are refined by establishing a complete information description, a detailed functional description, a representation of the system behaviour, an indication of the performance requirements and the design constraints, appropriate validation criteria, and other information pertinent to the requirements.

The introduction of the software requirements specification states the goals and objectives of the software, describing it in the context of the computer-based system. The information description provides a detailed description of the problem that the software must solve. Information content, flow and structure are documented. Hardware, software and human interfaces are described for the external system elements and internal software functions (Pressmann and Ince, 2000).

A description of each function required to solve the problem is presented in the functional description. A processing narrative is provided for each function, design constraints are stated and justified, performance characteristics are stated, and one or more diagrams are included to graphically represent the overall structure of the software and the interplay among software functions and other system elements.

The behavioural description section of the specification examines the operation of the software as a consequence of external events and internally generated control characteristics. Validation criteria are probably the most important and, ironically, the most neglected section of the software requirements specification (Pressmann and Ince, 2000).

Specification of validation criteria acts as an implicit review of all other requirements.

Both the developer and the customer conduct a review of the software requirements specification. Because the specification forms the foundation of the development phase, extreme care should be taken in conducting the review. The review is first conducted at a macroscopic level; that is, reviewers attempt to ensure that the specification is complete, consistent and accurate when the overall information, functional and behavioural domains are considered. Once the review is complete, both the customer and the developer sign off on the software requirements specification. The specification then becomes a "contract" for software development (Pressmann and Ince, 2000).

3.4.1 Hardware specifications

Processor : Intel Core 2 Duo

RAM : 4 GB of RAM

Hard Disk Drive : 320 GB of available Hard Disk space

Keyboard : Standard 108 keys

Monitor : Display panel (14 inches)

Pointing device : Mouse

Connectivity : Local intranet or internet

3.4.2 Software specifications

Technology used : .NET Framework 4.5

IDE : Visual Studio 2015

Front end : ASP.NET with C# code-behind

Back end : MS SQL Server 2008 R2

Operating system : Windows 7

3.5 About Microsoft Visual Studio 2015

Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft. It can be used to develop console and graphical user interface applications, along with Windows Forms applications, websites, web applications, and web services, in both native and managed code, for all platforms supported by Microsoft Windows, Windows Mobile, Windows CE, the .NET Framework, the .NET Compact Framework and Microsoft Silverlight.

Visual Studio supports different programming languages by means of language services, which allow the code editor and debugger to support nearly any programming language, provided a language-specific service exists. Built-in languages include C/C++ (via Visual C++), VB.NET (via Visual Basic .NET), C# (via Visual C#), and F# (as of Visual Studio 2012). It also supports XML/XSLT, HTML/XHTML, JavaScript and Cascading Style Sheets (CSS).

3.5.1 The .Net platform

The .NET Framework is Microsoft's comprehensive and consistent programming model for building applications that have visually stunning user experiences, seamless and secure communication, and the ability to model a range of business processes. The .NET Framework 4 works side by side with older Framework versions. Applications that are based on earlier versions of the Framework will continue to run on the version targeted by default.

3.5.2 The .Net frame work

The .NET Framework is a software framework that runs primarily on Microsoft Windows. It includes a large library and supports several programming languages which allow language interoperability (each language can use code written in other languages). Programs written for the .NET Framework execute in a software environment (as contrasted to hardware environment), known as the Common Language Runtime (CLR), an application virtual machine that provides important services such as security, memory management, and exception handling. The class library and the CLR together constitute the .NET Framework.

The .NET Framework's Base Class Library provides user interface, data access, database connectivity, cryptography, web application development, numeric algorithms, and network communications. Programmers produce software by combining their own source code with the .NET Framework and other libraries. The .NET Framework is intended to be used by most new applications created for the Windows platform. Microsoft also produces a popular integrated development environment largely for .NET software called Visual Studio.

The Microsoft .NET Framework 4 provides the following new features and improvements:

- Improvements in Common Language Runtime (CLR) and Base Class Library (BCL)
- Innovations in the Visual Basic and C# languages, for example, statement lambdas, implicit line continuations, dynamic dispatch, and named and optional parameters.
- Improvements in Data Access and Modelling
- Enhancements to ASP.NET

3.5.3 Features of ASP.NET

- Security: ASP.NET provides default authorization and authentication schemes for web applications.
- Compilation: All ASP.NET code, including scripts, is compiled, which allows for performance optimizations, strong typing and early binding. Once the code has been compiled, the common language runtime further compiles ASP.NET to native code.
- Application events: ASP.NET allows us to include application-level event handling code in the optional Global.asax file (see the sketch after this list).
- State facilities: ASP.NET also offers distributed state facilities. We can create multiple instances of the same application on one computer or on several computers.
- Deployment: ASP.NET configuration settings are stored in XML-based files, which are human-readable and writable.
- Manageability: ASP.NET also supplies performance counters within the Windows performance monitor. These counters can be used to monitor the performance of a single instance of an ASP.NET application.

SQL Server 2008 does not need to be installed separately when using ASP.NET 2008.
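As an example of the application-event feature mentioned above, a minimal Global.asax code-behind file might look as follows; the handler bodies are placeholder assumptions, not part of the described system.

using System;
using System.Web;

// Minimal sketch of application-level event handlers in Global.asax.cs.
public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Runs once when the application first starts, e.g. to set up caches.
    }

    protected void Session_Start(object sender, EventArgs e)
    {
        // Runs whenever a new user session begins.
    }

    protected void Application_Error(object sender, EventArgs e)
    {
        // Central place to inspect and log unhandled errors.
        Exception ex = Server.GetLastError();
    }
}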

3.5.4 Common language runtime (CLR)

Common Language Runtime (CLR) is the engine available in the .NET Framework to compile and run programs. The CLR engine does not compile the code directly into machine code but converts it into a set of instructions called Microsoft Intermediate Language (MSIL). This MSIL is one section of a Portable Executable (PE) file, the other being metadata. The PE file is generated automatically when you compile the program code.

The conversion of the program code to MSIL makes .NET platform- and language-independent. Although Microsoft does not currently provide CLR engines for other platforms, in the future you may find .NET applications being compiled on UNIX or Linux operating systems. After the conversion of the program code to MSIL, the code is translated to native or machine code. Instead of compiling the program code at development time, the MSIL code is translated 'just in time' by JIT compilers.
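The presence of MSIL in a compiled assembly can be observed directly with reflection. The following small sketch prints the size of a method's IL body; it is a demonstration of the concept, not part of the described system.

using System;
using System.Reflection;

// Demonstrates that compiled C# methods are stored as MSIL, which the JIT
// compiler later translates to native code at run time.
public static class MsilDemo
{
    public static int Add(int a, int b) { return a + b; }

    public static void Main()
    {
        MethodInfo method = typeof(MsilDemo).GetMethod("Add");
        byte[] il = method.GetMethodBody().GetILAsByteArray();
        Console.WriteLine("Add() is stored as {0} bytes of MSIL.", il.Length);
    }
}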

The CLR supports component-based programming. Component development has numerous attractive benefits, such as code reuse and proper maintenance of all components by allowing independent bug fixes to each. In addition, the CLR helps developers in managing both allocation and de-allocation of memory. This removes two of the largest sources of programmer error: memory leaks and memory corruption.

The CLR is also helpful for security purposes. It grants permissions to a component based on the process it runs in, and validates code based on evidence, such as information about the code at load time and the website from which the component was obtained, to assign permissions on a component-by-component basis. Moreover, the CLR checks the code to see whether it has been manipulated. The metadata in a CLR component can contain a digital signature that can be used to verify that the component was written by a genuine author and has not been modified, so you can reliably detect whether anyone has tampered with the code.

3.5.5 Microsoft SQL Server 2008

Microsoft SQL Server 2008 provides the following new features for database developers.

1. Increase the precision of storing and managing DATE and TIME information.
2. Store semi-structured and sparsely populated sets of data efficiently, using Sparse Columns.
3. New fully integrated Full-Text Indexes enable high-performance, scalable, and manageable Full-Text Indexing.
4. Create large User-Defined Types and User-Defined Aggregates greater than 8 KB.
5. Pass large amounts of data easily to functions or procedures using new Table-Valued Parameters (see the sketch after this list).
6. Perform multiple operations efficiently with the new MERGE command.
7. Model hierarchical data, such as org charts, or files and folders, using the new HierarchyId data type.
8. Build powerful location-aware applications, using SQL Server's new standards-compliant spatial data types and spatial indexing capabilities.
9. Manage files and documents efficiently with full SQL Server security and transaction support, using the powerful new FILESTREAM data type.
10. Easily identify dependencies across objects and databases, using New Dependency Management.
11. Experience faster queries and reporting with Grouping Sets, through powerful ANSI standards-compliant extensions to the GROUP BY clause.
12. Experience efficient, high-performance data access, using new Filtered Indexes for subsets of data.
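As a hedged illustration of feature 5 above, the following C# sketch passes a set of rows to a stored procedure in a single round trip using a table-valued parameter. It assumes a server-side table type dbo.FileBlockType and a procedure dbo.InsertFileBlocks already exist; both names are hypothetical.

using System.Data;
using System.Data.SqlClient;

// Sketch of a table-valued parameter: many rows travel to the server in one
// call instead of one INSERT per row.
public static class TvpExample
{
    public static void InsertBlocks(string connectionString, DataTable blocks)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.InsertFileBlocks", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            SqlParameter p = cmd.Parameters.AddWithValue("@Blocks", blocks);
            p.SqlDbType = SqlDbType.Structured;   // marks it as a TVP
            p.TypeName = "dbo.FileBlockType";     // the server-side table type
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}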

3.5.6 C# languages

C# is a general-purpose, object-oriented programming language designed around 1999-2000 by Anders Hejlsberg at Microsoft. It is very similar to Java in its syntax, a major difference being that all variable types are descended from a common ancestor class. The purpose of C# is to precisely define a series of operations that a computer can perform to accomplish a task. Most of these operations involve manipulating numbers and text, but anything that the computer can physically do can be programmed in C#. Computers have no intelligence; they have to be told exactly what to do, and this is defined by the programming language you use. Once programmed, they can repeat the steps as many times as you wish at very high speed. Modern PCs are so fast they can count to a billion in a second or two.
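As a quick check of that last claim, one can time such a loop with the Stopwatch class; actual figures depend on the machine and on JIT optimizations.

using System;
using System.Diagnostics;

// Times a loop counting to one billion, as a rough check of the claim above.
public static class CountBillion
{
    public static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        long counter = 0;
        for (long i = 0; i < 1000000000L; i++)
        {
            counter++;
        }
        sw.Stop();
        Console.WriteLine("Counted to {0:N0} in {1} ms.",
            counter, sw.ElapsedMilliseconds);
    }
}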

Features

1. It has no global variables or functions. All methods and members must be declared within classes. Static members of public classes can substitute for global variables and functions.
2. Local variables cannot shadow variables of the enclosing block, unlike in C and C++. Variable shadowing is often considered confusing by C++ texts.
3. C# supports a strict Boolean data type.
4. Managed memory cannot be explicitly freed; instead, it is automatically garbage collected. Garbage collection addresses the problem of memory leaks by freeing the programmer of responsibility for releasing memory that is no longer needed.
5. In addition to the try...catch construct to handle exceptions, C# has a try...finally construct to guarantee execution of the code in the finally block (see the sketch after this list).
6. Multiple inheritance is not supported, although a class can implement any number of interfaces. This was a design decision by the language's lead architect to avoid complication and to simplify architectural requirements throughout the Common Language Infrastructure (CLI).
7. C#, like C++ (but unlike Java), supports operator overloading.
8. C# currently (as of version 4.0) has 77 reserved words.
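The following short sketch illustrates features 5 and 7 from the list above: a try...finally block that guarantees cleanup, and operator overloading on a user-defined type. The Money type is an invented example.

using System;
using System.IO;

// Feature 7: operator overloading on a user-defined value type.
public struct Money
{
    public readonly decimal Amount;
    public Money(decimal amount) { Amount = amount; }

    public static Money operator +(Money a, Money b)
    {
        return new Money(a.Amount + b.Amount);
    }
}

public static class FeatureDemo
{
    // Feature 5: the finally block runs whether or not an exception occurred.
    public static string ReadFirstLine(string path)
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader(path);
            return reader.ReadLine();
        }
        finally
        {
            if (reader != null) reader.Dispose();
        }
    }
}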

3.6 Selection of operating system

3.6.1 Windows 10: an overview

Windows 8 made the hugely controversial move to eliminate the Start Menu, opting instead for a don’t-call-it-Metro style Start Screen. It went over about as well as you’d expect. In Windows 10, however, the Start Menu is back. Now, Live Tiles live here just like regular app icons, in (relatively) perfect harmony.

3.6.2 Network data security

Network data can be protected on the wire or at the network interface. Securing data at the network requires a firewall to proxy services and mediate connections between the internal network (LAN) and the external network (Internet). This is the purpose of Proxy Server.

3.6.3 Internet protocol security

Internet Protocol Security (IPSec) is a framework of open standards for ensuring secure private communications over Internet Protocol networks, using cryptographic security services.

3.6.4 Microsoft Edge web browser

Microsoft Edge is the default web browser in Windows 10, and it has been a year since its release. It uses EdgeHTML as the web rendering engine and allows us to do more on the web with its built-in Cortana, reading tools and note-taking features. As a whole, Microsoft Edge provides a more efficient, more secure, faster, more productive and more compatible experience to all Windows 10 users. Long-awaited features such as support for extensions, notifications from websites, pinning tabs and much more were added to Microsoft Edge in the latest Windows 10 version, 1607.

3.6.5 Advantages of the Edge web browser

- When we start typing a frequently used web address in the address bar, a list of similar addresses appears that you can choose from. And if a web page address is wrong, Edge can search for similar addresses to try to find a match.
- Install extensions in the Microsoft Edge browser.
- In the search bar, type a word or phrase that describes what you are looking for.
- Go to other web pages similar to the one you are viewing without even doing a search; just use the show related sites feature.
- Browse through the list of web pages you recently visited by clicking the history button on the toolbar.

3.7 System design

System design provides an understanding of the procedural details necessary for implementing the system recommended in the feasibility study. Basically, it is about the creation of a new system. This is a critical phase, since it decides the quality of the system and has a major impact on the testing and implementation phases.

System design consists of three major steps

- Drawing of the expanded system data flow charts to identify all the processing functions required.
- The allocation of the equipment and the software to be used.
- The identification of the test requirements for the system.

3.7.1 Characteristics of design

- A design should exhibit a hierarchical organization that makes intelligent use of control among components of the software.
- A design should be modular; that is, the software should be logically partitioned into elements that perform specific functions.
- A design should contain distinct and separable representations of data and procedure.
- A design should lead to interfaces that reduce the complexity of the connections between modules and with the external environment.

3.7.2 Design of the proposed system

In the design phase, a detailed design of the system is carried out and the user-oriented performance specification is converted into a technical design specification. Principal activities performed during the design phase include the allocation of functions between computer programs, equipment and manual tasks, database design, and test requirements definition.

The design phase begins with system design. This step involves the allocation of functions. Effective input design minimizes the errors made by data entry operators. Output design has been an ongoing activity almost from the beginning of the project; here, the layouts for all the system outputs are prepared.

Illustration not included in this excerpt

Figure 1. Context level diagram; Level_0 DFD

Illustration not included in this excerpt

Figure 2. Top Level DFD for user

Illustration not included in this excerpt

Figure 3. Level_2 DFD for user process 4; file upload.

Illustration not included in this excerpt

Figure 4. Level_2 DFD for user process 5; file upload.

Illustration not included in this excerpt

Figure 5. Level_2 DFD for user process 6; verified files.

Illustration not included in this excerpt

Figure 6. Top level DFD for TPA.

Illustration not included in this excerpt

Figure 7. Top level DFD for Admin.

3.8 Input design

Input design is a part of overall system design which requires very careful attention. If the data going into the system is incorrect, then the processing and output will magnify these errors. Thus the designer has a number of clear objectives in the different stages of input design:

- To produce a cost effective method of input.
- To achieve the highest possible level of accuracy.
- To ensure that input is acceptable to and understood by the user.

[...]

End of excerpt from 69 pages

Summary of information

Title
Cloud information accountability. An overview
College
Mar Augusthinose College
Grade
1.5
Authors
Dr. Prem Jose Vazhacharickal, Sunil K. Joseph, Abhiram Vijayan
Year
2017
Pages
69
Catalog number
V367199
ISBN (eBook)
9783668456204
ISBN (Book)
9783668456211
File size
20955 KB
Language
English
Keywords
cloud
Quote paper
Dr. Prem Jose Vazhacharickal (Author), Sunil K. Joseph (Author), Abhiram Vijayan (Author), 2017, Cloud information accountability. An overview, Munich, GRIN Verlag, https://www.grin.com/document/367199
