https://youtu.be/nbt1mbkwWk4
Name : Chyntia Dwinovita
NPM : 52414396
Class : 4IA22
Material : 1 & 2
Monday, 4 June 2018
Tuesday, 17 April 2018
Computation Theory and Cloud Computing
1. Introduction
a. Computational Theory
Computational theory (the theory of computation) is the branch of computer science and mathematics that asks whether and how a problem can be solved on a model of computation using an algorithm. The field is concerned primarily with computability and computational complexity, expressed in terms of formal models of computation.
To study computation rigorously, computer scientists work with mathematical abstractions of computers called models of computation. Several models are used, but the most commonly studied is the Turing machine. A Turing machine can be thought of as a personal computer with an infinite memory capacity that can only be accessed in separate, discrete cells. Computer scientists study Turing machines because they are easy to formulate and analyze, they can be used in proofs, and they represent what is considered the most powerful yet still reasonable model of computation. Unlimited memory capacity may seem unattainable, but any decidable problem solved by a Turing machine only ever requires a finite amount of memory. So, in principle, any problem that can be solved (decided) by a Turing machine can be solved by a computer with a limited amount of memory.
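To make the model concrete, here is a minimal Python sketch of a Turing machine simulator, assuming a simple hand-written transition table that increments a binary number; the function names, states, and table are illustrative only, not taken from any formal textbook definition.

```python
# Minimal Turing machine simulator (illustrative sketch).
from collections import defaultdict

def run_turing_machine(transitions, tape_str, state="start", blank="_", max_steps=1000):
    # The tape is modeled as an "infinite" dictionary of cells, one symbol each.
    tape = defaultdict(lambda: blank, enumerate(tape_str))
    head = len(tape_str) - 1              # start at the rightmost symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        state, write, move = transitions[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Transition table: (state, symbol read) -> (next state, symbol to write, head move).
# This machine adds one to the binary number written on the tape.
increment = {
    ("start", "1"): ("start", "0", "L"),  # carry: trailing 1s become 0s
    ("start", "0"): ("halt", "1", "L"),   # first 0 becomes 1, then halt
    ("start", "_"): ("halt", "1", "L"),   # ran past the left edge: prepend a 1
}

print(run_turing_machine(increment, "1011"))  # prints "1100" (11 + 1 = 12)
```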
b. Implementation of Modern Computing
- Implementation of Modern Computing in the Field of Economics
In economics, modern computing covers agent-based computational modeling, computational econometrics and statistics, financial computing, computational modeling of dynamic macroeconomic systems, programming designed specifically for economic computation, and the development of tools for teaching computational economics. A typical problem such algorithms must solve is, for example, applying statistical theory to financial problems.
One example of computing in economics is statistical computing, the field that studies data-processing techniques, programming, data analysis, and the construction of statistical information systems such as databases, data communications, network systems, and the dissemination of statistical data. Computing can also be used to solve economic problems through data mining; with data mining, a company can address problems in the most effective way possible.
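As a small illustration of statistical computing on financial data, the following Python sketch computes summary statistics and a moving average over a made-up price series; the numbers and window size are assumptions chosen only for the example.

```python
# Simple statistical computing sketch: summary statistics and a moving average
# over a (made-up) series of daily closing prices.
import statistics

prices = [101.2, 102.5, 101.8, 103.1, 104.0, 103.6, 105.2]

mean_price = statistics.mean(prices)
volatility = statistics.stdev(prices)          # sample standard deviation

def moving_average(series, window):
    """Return the simple moving average of `series` with the given window."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]

print(f"mean = {mean_price:.2f}, stdev = {volatility:.2f}")
print("3-day moving average:", [round(x, 2) for x in moving_average(prices, 3)])
```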
- Implementation of Modern Computation in Chemistry
In chemistry, modern computation takes the form of computational chemistry: the use of computer science to help solve chemical problems, for example the use of supercomputers to calculate molecular structures and properties. The term theoretical chemistry can be defined as the mathematical description of chemistry, whereas computational chemistry is the term typically used once the mathematical methods are developed well enough to be implemented in computer programs. The words "exact" or "perfect" do not appear here, because very few aspects of chemistry can be calculated exactly; nevertheless, almost every aspect of chemistry can be described in an approximate quantitative or qualitative computational scheme.
- Implementation of Modern Computation in Mathematics
Solving problems has always involved mathematical calculation, but in the context of modern computing it means building systems that solve mathematical problems on a computer by constructing algorithms the computer can execute, for the benefit of human problem solving. Numerical analysis provides the algorithms used to analyze such mathematical problems. Applied mathematical computing techniques include numerical methods, scientific computing, the finite element method, finite difference methods, scientific data mining, scientific process control, and other related methods for solving large real-world problems.
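As a minimal example of the numerical methods mentioned above, here is a Python sketch of Newton's method for root finding; the target function and tolerance are illustrative choices, not part of the original discussion.

```python
# Newton's method: a basic numerical-analysis algorithm for finding a root of f(x) = 0.
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    raise RuntimeError("Newton's method did not converge")

# Example: approximate sqrt(2) as the positive root of x^2 - 2 = 0.
root = newton(f=lambda x: x * x - 2, df=lambda x: 2 * x, x0=1.0)
print(root)  # ~1.4142135623730951
```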
- Implementation of Computing in the Field of Geography
Geography is the study of the location, similarities, and spatial variation of physical and human phenomena on the surface of the earth. Computing in geography is commonly used for weather forecasting; in Indonesia, for example, the state agency BMKG (the Meteorology, Climatology and Geophysics Agency) observes meteorology, air quality, climatology, and geophysics in accordance with the legislation applicable in Indonesia.
- Implementation of Modern Computation in the Field of Physics
In physics, modern computing appears as computational physics, which combines physics, computer science, and applied mathematics to provide solutions to complex events in the real world, using both simulation and the appropriate use of algorithms. Understanding physics through theory, experiment, and computation must go hand in hand in order to produce the numerical solutions and visualizations/models needed to understand physical problems. Typical tasks in computational physics include evaluating integrals, solving differential equations, solving simultaneous equations, plotting functions or data, developing series expansions of functions, finding the roots of equations, and working with complex numbers. Many languages and software packages, such as MATLAB, Visual Basic, Fortran, Open Source Physics (OSP), LabVIEW, and Mathematica, are used for understanding and finding numerical solutions to problems in computational physics.
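One of the tasks listed above, solving a differential equation numerically, can be sketched in a few lines of Python with Euler's method; the decay equation and constants below are assumptions chosen purely for illustration.

```python
# Euler's method: a simple numerical scheme for solving a differential equation,
# here radioactive decay dN/dt = -k * N (constants chosen for illustration).
def euler(f, y0, t0, t1, steps):
    """Integrate dy/dt = f(t, y) from t0 to t1 with a fixed step size."""
    dt = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += dt * f(t, y)
        t += dt
    return y

k = 0.5                                   # decay constant (assumed)
n_final = euler(lambda t, n: -k * n, y0=1000.0, t0=0.0, t1=4.0, steps=1000)
print(n_final)              # close to the exact value 1000 * exp(-2) ~ 135.34
```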
- Implementation of Modern Computing in the Field of Geology
In geology, computation is commonly used for mining: computer systems are used to analyze the minerals and other materials contained in the soil.
2. Cloud Computing
a. Introduction
Cloud computing is a method for delivering
information technology (IT) services in which resources are retrieved from the
Internet through web-based tools and applications, as opposed to a direct
connection to a server. Rather than keeping files on a proprietary hard drive
or local storage device, cloud-based
storage makes it possible to save them to a remote database. As long as an
electronic device has access to the web, it has access to the data and the
software programs to run it.
It's called cloud computing because the information
being accessed is found in "the cloud" and does not require a user to
be in a specific place to gain access to it. This type of system allows
employees to work remotely. Companies providing cloud services enable users to
store files and applications on remote servers, and then access all the data
via the internet.
Advantages of Cloud Computing:
- Cost
Cloud computing eliminates the capital expense of
buying hardware and software and setting up and running on-site datacenters—the
racks of servers, the round-the-clock electricity for power and cooling, the IT
experts for managing the infrastructure. It adds up fast.
- Speed
Most cloud computing services are provided self-service and on demand, so even vast amounts of computing resources can be
provisioned in minutes, typically with just a few mouse clicks, giving
businesses a lot of flexibility and taking the pressure off capacity planning.
- Global Scale
The benefits of cloud computing
services include the ability to scale elastically. In cloud speak, that means
delivering the right amount of IT resources—for example, more or less computing
power, storage, bandwidth—right when it's needed and from the right geographic
location.
- Productivity
On-site datacenters typically
require a lot of “racking and stacking”—hardware setup, software patching and
other time-consuming IT management chores. Cloud computing removes the need for
many of these tasks, so IT teams can spend time on achieving more important business
goals.
- Performance
The biggest cloud computing services run on a
worldwide network of secure datacenters, which are regularly upgraded to the
latest generation of fast and efficient computing hardware. This offers several
benefits over a single corporate datacenter, including reduced network latency
for applications and greater economies of scale.
- Reliability
Cloud computing makes data backup,
disaster recovery and business continuity easier and less expensive, because
data can be mirrored at multiple redundant sites on the cloud provider’s
network.
Disadvantages of Cloud Computing
Initially, security was seen as a
detractor from using the cloud, especially when it came to sensitive medical
records and financial information. While regulations are forcing cloud
computing services to shore up their security and compliance measures, it
remains an ongoing issue. Media headlines are constantly screaming about data
breaches at this or that company, in which sensitive information has made its
way into the hands of malicious hackers who may delete, manipulate or otherwise
exploit the data (though, according to some reports, most of the data breaches
have been with on-site systems, not those in the cloud). Encryption
protects vital information, but if the encryption key is lost, the data
disappears.
Servers maintained by cloud computing companies can fall victim to natural disasters, internal bugs, and power outages, too. And, unfortunately, the geographical reach of cloud computing cuts both ways: a blackout in California could paralyze users in New York, and a firm in Texas could lose its data if something causes its Maine-based provider to crash.
Ultimately, as with any new technology, there is a
learning curve for employees and managers. But with many individuals accessing
and manipulating information through a single portal, inadvertent mistakes can
transfer across an entire system.
Types of Cloud Computing
- SaaS
SaaS (Software as a Service) provides clients with the ability to use
software applications on a remote basis via an internet web browser. Software
as a service is also referred to as “software on demand”.
Clients can access SaaS applications from anywhere via the web because
service providers host applications and their associated data at their
location. The primary benefit of SaaS is a lower cost of use, since subscriber
fees require a much smaller investment than what is typically encountered under
the traditional model of software delivery. Licensing fees, installation costs,
maintenance fees and support fees that are routinely associated with the
traditional model of software delivery can be virtually eliminated by
subscribing to the SaaS model of software delivery. Examples of SaaS include:
Google Applications and internet based email applications like Yahoo! Mail,
Hotmail and Gmail.
- PaaS
PaaS (Platform as a Service) provides clients with the ability to develop and publish customized applications in a hosted environment via the web. It represents a new model for software development that is rapidly increasing in popularity. An example of PaaS is Salesforce.com.
PaaS provides a framework for agile software development, testing, deployment, and maintenance in an integrated environment. Like SaaS, the primary benefit of PaaS is a lower cost of use, since subscriber fees require a much smaller investment than what is typically encountered when implementing traditional tools for software development, testing, and deployment. PaaS providers handle platform maintenance and system upgrades, resulting in a more efficient and cost-effective solution for enterprise software development.
- IaaS
IaaS (Infrastructure as a Service) allows clients to remotely use IT
hardware and resources on a “pay-as-you-go” basis. It is also referred to as
HaaS (hardware as a service). Major IaaS players include companies
like IBM, Google and Amazon.com. IaaS employs virtualization, a method of
creating and managing infrastructure resources in the “cloud”. IaaS provides
small start-up firms with a major advantage, since it allows them to gradually
expand their IT infrastructure without the need for large capital investments
in hardware and peripheral systems.
b. Grid Computing
Grid computing is the use of resources from multiple distributed and geographically dispersed computers to solve large-scale computing problems. It is a branch of distributed computing, distinguished mainly by being applied at the infrastructure level of completing a process.
Grid computing is a form of cluster (composite) of computers that tends not to be bound by geographical boundaries, whereas a conventional cluster is always implemented in one place by combining multiple computers over a local network.
The initial idea of grid computing began with distributed computing, i.e. the study of the coordinated use of computers that are physically separate or distributed. Distributed systems require applications designed differently from those for centralized systems. This later developed into parallel computing, a technique for performing computations simultaneously by using multiple computers at once.
Grid computing offers a low-cost computing solution that makes use of scattered, heterogeneous resources and allows easy access from anywhere. The Globus Toolkit is a collection of open-source software and libraries for creating grid computing environments; such an environment is expected to simplify and optimize the execution of programs that use parallel libraries. Indonesia has its own grid system, named InGrid (Inherent Grid), which began operation in March 2007 and continues to be developed. InGrid connects several public and private universities spread across Indonesia and several government agencies such as the Meteorology and Geophysics Agency.
Concept of Grid Computing
Some basic concepts of grid computing:
1. Resources are managed and controlled locally.
2. Different resources may have different policies and mechanisms, including computing resources managed by different batch systems, different storage systems at different nodes, and different policies entrusted to the same user on different resources of the grid.
3. Dynamic nature: resources and users can change frequently.
4. A collaborative environment for e-communities (electronic communities, on the internet).
5. Three things are shared in a grid system: resources, network, and processes. The service of the grid system itself is to perform high-throughput computing for research or for other computing processes that require a large amount of computer resources.
How Grid Computing Works
According to a short article by Ian Foster, there is a checklist that can be used to identify whether a system is doing grid computing:
1. The system coordinates computational resources that are not under centralized control. If the resources used are all within a single administrative domain, the computation cannot be said to be grid computing.
2. The system uses standards and protocols that are open (not tied to a particular implementation or product). Grid computing is built on agreements about fundamental issues that are required to realize large-scale shared computing. The agreements and standards required are in the areas of authentication, authorization, resource discovery, and access to resources.
3. The system strives to achieve a sophisticated (nontrivial) quality of service, far above the quality of service of the individual components of the grid.
Advantages and Disadvantages of Grid Computing
The use of a grid computing system will provide companies with many benefits, both direct and indirect. Some of these benefits include:
1. Grid computing promises increased utilization of, and greater flexibility for, infrastructure resources, applications, and information, and it also promises increased productivity in the company's work.
2. Grid computing can provide savings, both in capital investment and in operating costs.
Some of the obstacles experienced in Indonesia in applying grid computing technology are as follows:
1. The overly bureaucratic management of institutions makes them reluctant to open up their facilities for joint use, even though doing so would bring greater benefits to the wider community.
2. There are still too few human resources competent in managing grid computing; for example, IT technicians as well as non-technical users often lack sufficient knowledge about the benefits of grid computing itself.
Grid Computing Examples
- Scientific Simulation
Grid computing is implemented in physics, chemistry, and
biology to simulate complex processes.
- Medical Images
Data grids and grid computing are used to store medical images. An example is the eDiaMoND project.
- Computer-Aided Drug Discovery (CADD)
Grid computing is used to aid drug discovery. One example is the Molecular Modeling Laboratory (MML) at the University of North Carolina (UNC).
- Big Science
Grid data and grid computing are used to assist
government-sponsored laboratory projects. Examples are found in DEISA.
- E-Learning
Grid computing helps build infrastructure to meet the
needs of information exchange in education. An example is AccessGrid.
c. Virtualization in Cloud Computing
Virtualization is
the "creation of a virtual (rather than actual) version of something, such
as a server, a desktop, a storage device, an operating system or network
resources".
In other words, virtualization is a technique that allows a single physical instance of a resource or an application to be shared among multiple customers and organizations. It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource when it is demanded.
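A toy Python sketch of that idea, mapping logical names onto a shared physical resource and handing out a reference on demand; the class name and resource values are hypothetical and only meant to illustrate the indirection.

```python
# Illustrative sketch: a logical name maps onto a shared physical resource,
# and callers get a pointer (reference) to it only when they ask for it.
class ResourcePool:
    def __init__(self):
        self._physical = {}                  # logical name -> physical resource

    def register(self, logical_name, physical_resource):
        self._physical[logical_name] = physical_resource

    def acquire(self, logical_name):
        """Return a reference to the physical resource behind a logical name."""
        return self._physical[logical_name]

pool = ResourcePool()
pool.register("customer-a-storage", {"device": "/dev/sdb1", "size_gb": 500})
pool.register("customer-b-storage", {"device": "/dev/sdb1", "size_gb": 500})  # same device, shared

print(pool.acquire("customer-a-storage"))   # both logical names point at one physical device
```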
What is the concept behind Virtualization?
The creation of a virtual machine on top of an existing operating system and hardware is known as hardware virtualization. A virtual machine provides an environment that is logically separated from the underlying hardware.
The machine on which the virtual machine is created is known as the host machine, and the virtual machine itself is referred to as the guest machine.
Types of Virtualization:
1. Hardware Virtualization
When the virtual machine software or virtual machine manager (VMM) is installed directly on the hardware system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory, and other hardware resources.
After the hardware system has been virtualized, we can install different operating systems on it and run different applications on those operating systems.
Usage:
Hardware virtualization is mainly done for server platforms, because controlling virtual machines is much easier than controlling a physical server.
2. Operating System Virtualization
When the virtual machine software or virtual machine manager (VMM) is installed on the host operating system instead of directly on the hardware, it is known as operating system virtualization.
Usage:
Operating system virtualization is mainly used for testing applications on different operating system platforms.
3. Server Virtualization
When the virtual machine software or virtual machine manager (VMM) is installed directly on the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into multiple servers on demand and for load balancing.
4. Storage Virtualization
Storage virtualization is the process of grouping the physical storage from multiple network storage devices so that it looks like a single storage device. Storage virtualization is also implemented by using software applications.
Usage:
Storage virtualization is mainly done for backup and recovery purposes.
How does virtualization work in cloud computing?
Virtualization plays a very important role in cloud computing technology. Normally, in cloud computing, users share the data present in the cloud, such as applications, but with the help of virtualization users actually share the underlying infrastructure.
The main use of virtualization technology is to provide applications in their standard versions to cloud users; when the next version of an application is released, the cloud provider has to provide that latest version to its users, which in practice is difficult because it is expensive. To overcome this problem, virtualization technology is used: the servers and software applications required by the cloud providers are maintained by a third party, and the cloud providers pay for them on a monthly or annual basis.
d. Distributed Computing
Distributed Computing can be defined as the use
of a distributed system to solve a single large problem by breaking it down
into several tasks where each task is computed in the individual computers of
the distributed system. A distributed system consists of more than one self-directed computer that communicates through a network. All the computers
connected in a network communicate with each other to attain a common goal by
making use of their own local memory. On the other hand, different users of a
computer possibly might have different requirements and the distributed systems
will tackle the coordination of the shared resources by helping them communicate
with other nodes to achieve their individual tasks.
Generally, in the case of individual computer failures
there are tolerance mechanisms in place. However, the cardinality, topology
and the overall structure of the system is not known beforehand and everything
is dynamic.
Distributed Computing System Examples:
- World Wide Web
- Social media giant Facebook
- Hadoop’s Distributed File System (HDFS)
- ATM
- Cloud network systems (a specialized form of distributed computing systems)
- Google bots, Google web server, indexing server
To a normal user, a distributed computing system appears as a single system, whereas internally it is connected to several nodes that perform the designated computing tasks. Consider the Google web server from the user's point of view. When users submit a search query, they believe the Google web server is a single system where they simply go to Google.com and search for the required term. What really happens underneath is distributed computing: Google deploys several servers distributed across different geographical locations to provide the search result in seconds, or at times milliseconds.
In the master/slave architecture model of distributed computing, the master node has unidirectional control over one or more slave nodes. The task is distributed by the master node to the configured slaves and the results are returned to the master node, as the sketch below illustrates.
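The following is a minimal Python sketch of that master/worker pattern, with a local process pool standing in for remote worker nodes; the task of squaring numbers and the chunking scheme are illustrative assumptions, not a description of any real cluster.

```python
# Master/worker sketch: the "master" splits work into tasks, hands them to
# workers, and collects the results. Here a local process pool stands in for
# remote worker nodes; in a real distributed system the tasks would travel
# over the network.
from concurrent.futures import ProcessPoolExecutor

def worker_task(chunk):
    """Work done on one worker node: here, just square every number."""
    return [x * x for x in chunk]

def master(data, n_workers=4):
    # The master splits the data into one contiguous chunk per worker.
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(worker_task, chunks)                 # distribute the tasks
    return [value for partial in results for value in partial]  # gather the results

if __name__ == "__main__":
    print(master(list(range(10))))   # [0, 1, 4, ..., 81]
```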
Benefits of Distributed Computing
- Distributed computing systems provide a better price/performance ratio than a centralized computer, because adding microprocessors is more economical than adding mainframes.
- Distributed computing systems have more computational power than centralized (mainframe) computing systems, and they provide incremental growth, so organizations can add software and computing power in increments as and when the business needs it.
e. MapReduce and NoSQL (Not Only SQL)
MapReduce and NoSQL (Not Only SQL) are programming frameworks that help users process very large data sets distributed across many machines. MapReduce is one of the most important technical concepts in cloud technology, especially because it can be implemented in a distributed computing environment and therefore helps guarantee the scalability of an application.
One real-world implementation of MapReduce is what Google does. Drawing inspiration from the map and reduce functions of functional programming, Google built a highly scalable distributed storage system, Google Bigtable. Also inspired by Google, the open-source world rapidly developed another distributed framework using the same concept: the open-source project named Apache Hadoop.
MapReduce is a programming model released by Google, intended to process huge amounts of data in a distributed and parallel fashion across clusters of thousands of computers. Processing is divided into two main phases, Map and Reduce. The Map phase collects information from the pieces of data distributed across each computer in the cluster (a group of connected computers). Its results are handed to the Reduce phase for further processing, and the result of the Reduce phase is the final output sent to the user.
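A minimal, single-machine Python sketch of the Map and Reduce phases described above, using the classic word-count example; the helper names and input documents are invented for illustration, and a real job would run on a cluster framework such as Hadoop rather than in one process.

```python
# Word count expressed in MapReduce style, run locally for illustration.
from collections import defaultdict

def map_phase(document):
    """Map: emit (word, 1) pairs for every word in one piece of data."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Group intermediate values by key, as the framework does between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Reduce: combine all values for one key into the final result."""
    return key, sum(values)

documents = ["grid computing in the cloud", "cloud computing and grid computing"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(counts)   # {'grid': 2, 'computing': 3, 'in': 1, 'the': 1, 'cloud': 2, 'and': 1}
```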
NoSQL is a type of database that is very different in concept from an RDBMS or ODBMS. The main difference is that it has no notion of relations and does not use a fixed schema; in NoSQL, each table stands alone, independent of other tables. An example of a NoSQL database is MongoDB.
Contrary to the misunderstanding caused by its name, NoSQL does not prohibit the structured query language (SQL). While it is true that some NoSQL systems are completely non-relational, others simply avoid selected relational features such as fixed table schemas and join operations. For example, instead of using tables, NoSQL databases might organize data into objects, key/value pairs, or tuples.
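As a small illustration of that last point, here is a Python sketch contrasting a relational-style row with the same data held as a key/value pair whose value is a self-describing document; the keys, field names, and figures are invented for the example only.

```python
# Relational style: fixed columns, one row per table, relations expressed by ids.
customer_row = ("c-001", "Chyntia", "Depok")            # (id, name, city)
order_row = ("o-1001", "c-001", 45.50)                  # (id, customer_id, total)

# NoSQL style: one key maps to a self-describing document (here a nested dict)
# that holds everything related to the order, so no join is needed to read it.
orders = {
    "o-1001": {
        "customer": {"id": "c-001", "name": "Chyntia", "city": "Depok"},
        "items": [
            {"sku": "BK-01", "title": "Cloud Computing Basics", "qty": 1},
            {"sku": "BK-07", "title": "Intro to NoSQL", "qty": 2},
        ],
        "total": 45.50,
    }
}

print(orders["o-1001"]["customer"]["city"])   # one lookup, no join: "Depok"
```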
f. NoSQL Database
NoSQL is an approach to databases that represents a
shift away from traditional relational database management systems (RDBMS). To
define NoSQL, it is helpful to start by describing SQL, which is a query
language used by RDBMS. Relational databases rely on tables, columns, rows, or
schemas to organize and retrieve data. In contrast, NoSQL databases do not rely
on these structures and use more flexible data models. NoSQL can mean “not SQL”
or “not only SQL.” As RDBMS have increasingly failed to meet the performance,
scalability, and flexibility needs that next-generation, data-intensive
applications require, NoSQL databases have been adopted by mainstream
enterprises. NoSQL is particularly useful for storing unstructured data, which
is growing far more rapidly than structured data and does not fit the
relational schemas of RDBMS. Common types of unstructured data include: user
and session data; chat, messaging, and log data; time series data such as IoT
and device data; and large objects such as video and images.
TYPES OF NOSQL DATABASES
Several different varieties of NoSQL databases have
been created to support specific needs and use cases. These fall into four main
categories:
1. Key-value
data stores: Key-value NoSQL databases emphasize simplicity
and are very useful in accelerating an application to support high-speed read and
write processing of non-transactional data. Stored values can be any type of
binary object (text, video, JSON document, etc.) and are accessed via a key.
The application has complete control over what is stored in the value, making
this the most flexible NoSQL model. Data is partitioned and
replicated across a cluster to get scalability and availability. For this
reason, key value stores often do not support transactions. However, they are
highly effective at scaling applications that deal with high-velocity,
non-transactional data.
2. Document
stores: Document databases typically
store self-describing JSON, XML, and BSON documents. They are similar to
key-value stores, but in this case, a value is a single document that stores
all data related to a specific key. Popular fields in the document can be
indexed to provide fast retrieval without knowing the key. Each document can
have the same or a different structure.
3. Wide-column
stores: Wide-column NoSQL databases store data in tables with rows and
columns similar to RDBMS, but names and formats of columns can vary from row to
row across the table. Wide-column databases group columns of related data
together. A query can retrieve related data in a single operation because only
the columns associated with the query are retrieved. In an RDBMS, the data
would be in different rows stored in different places on disk, requiring
multiple disk operations for retrieval.
4. Graph
stores: A graph database uses graph structures to store, map, and query
relationships. They provide index-free adjacency, so that adjacent elements are
linked together without using an index.
Multi-model databases leverage some combination of the four types described above and can therefore support a wider range of applications.
BENEFITS OF NOSQL
NoSQL
databases offer enterprises important advantages over traditional RDBMS,
including:
1. Scalability: NoSQL
databases use a horizontal scale-out methodology that makes it easy to add or
reduce capacity quickly and non-disruptively with commodity hardware. This
eliminates the tremendous cost and complexity of manual sharding that is
necessary when attempting to scale RDBMS.
2. Performance: By
simply adding commodity resources, enterprises can increase performance with
NoSQL databases. This enables organizations to continue to deliver reliably
fast user experiences with a predictable return on investment for adding
resources—again, without the overhead associated with manual sharding.
3. High
Availability: NoSQL databases are generally designed to ensure high
availability and avoid the complexity that comes with a typical RDBMS
architecture that relies on primary and secondary nodes. Some “distributed”
NoSQL databases use a masterless architecture that automatically distributes
data equally among multiple resources so that the application remains available
for both read and write operations even when one node fails.
4. Global
Availability: By automatically replicating data across multiple servers,
data centers, or cloud resources, distributed NoSQL databases can minimize latency
and ensure a consistent application experience wherever users are located. An
added benefit is a significantly reduced database management burden from manual
RDBMS configuration, freeing operations teams to focus on other business
priorities.
5. Flexible
Data Modeling: NoSQL offers the ability to implement flexible and fluid
data models. Application developers can leverage the data types and query
options that are the most natural fit to the specific application use case
rather than those that fit the database schema. The result is a simpler
interaction between the application and the database and faster, more agile
development.
References :
http://deynarkhairunnisa.blogspot.co.id/2015/10/pengantar-komputasi-grid.html
http://febbri-grunge.blogspot.co.id/2015/06/komputasi-grid-grid-computing.html
https://id.wikipedia.org/wiki/Komputasi_grid
https://dyaherwiyanti.wordpress.com/2016/03/28/map-reduce-dan-nosql-not-only-sql/
https://www.javatpoint.com/virtualization-in-cloud-computing