Saturday, 19 December 2020

Five reasons to use cloud virtual desktops to support the hybrid workforce

The transition to remote work has boosted interest in virtual desktop infrastructure (VDI) to provide a better and more secure experience.

When the pandemic began, the priority was to ensure business continuity, said Gabe Knuth, Marketing Manager with VMware, at a recent ITWC webinar. “Now we’re trying to figure out how to set ourselves up for the digital-first new normal,” he said. “To do so, organizations need to adopt agile technologies, such as virtual apps and desktops that seamlessly integrate into a user’s work environment.”

Working from home has not been without its challenges, Knuth said. Users have complained of a degraded experience with VPN. Some have connectivity challenges, and security is a constant concern.

“With everyone working remotely, the need for a consistent user experience regardless of workload or user location has never been more important,” said Scott Matthewson, Innovation Services Lead with Softchoice.

How VDI helps prepare your organization for the future of work

Desktop virtualization allows users to access virtual desktops and applications from a connected device. Knuth explained that it “empowers the future-ready workforce” in five ways:

  1. Supports hybrid environments. In a 2019 VMware survey, 91 per cent of users said that hybrid is a nice-to-have or must-have capability, and 77 per cent expect multi-cloud to be part of their environment. VDI allows organizations to deploy desktops from anywhere, whether on-premises or in the cloud, said Knuth. “And you can manage it all from a single pane of glass,” he said. “You don’t have to manage separate silos of desktops and apps, wherever they happen to be.”
  2. Simplifies management. A virtual desktop service allows admins to package and deploy an application to any environment one time, as well as enforce user settings across platforms, said Knuth. Another big advantage is that it can enable remote access to corporate resources very quickly. “We had one customer that needed to spin up 35,000 desktops and we were able to help them do that in just five days,” said Knuth. A business analysis estimates the savings in support and administrative efficiency at $216,000 over three years.
  3. Improves end-to-end security. VDI provides built-in security across the infrastructure, and it stores data in the cloud instead of on local desktops.
  4. Serves as a trusted digital foundation for a modern architecture. A VDI service can adapt and integrate easily with an organization’s existing Active Directory or with other identity providers.
  5. Enhances user experience. The improvement in user experience is the most important advantage, said Knuth. “We want the users to be able to work from anywhere and on any device,” he said. “We want their productivity to stay the same, even though they may be working from home.” VDI supports real-time audio and video for applications like Zoom and Microsoft Teams, and its adaptive protocols adjust to changing network conditions that affect latency. “So, when the kids fire up Netflix on their iPads, the experience of the end users will be maintained. The ultimate goal is to make sure the user experience is as close to local as possible, or better.”

Wednesday, 16 September 2020

Save on data center operating costs by moving to the cloud

 

The cost of running a traditional data center

Although each data center is a little different, the average cost to operate a large data center is usually between $10 million and $25 million per year.

  • 42 percent: Hardware, software, disaster recovery arrangements, uninterruptible power supplies, and networking. (Costs are spread over time, or amortized, because they are a combination of capital expenditures and regular payments.)

  • 58 percent: Heating, air conditioning, property and sales taxes, and labor costs. (In fact, as much as 40 percent of annual costs are labor alone.)
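
To make the split concrete, here is a minimal sketch that applies these percentages to a hypothetical $20 million annual budget (the total is an assumption chosen for illustration only):

```python
# Illustrative breakdown of annual operating costs for a traditional
# data center, using the percentages cited above. The $20M total is a
# hypothetical figure, not a measured one.
annual_budget = 20_000_000

hardware_and_network = annual_budget * 0.42  # hardware, software, DR, UPS, networking
facilities_and_labor = annual_budget * 0.58  # HVAC, taxes, and labor
labor_alone = annual_budget * 0.40           # labor alone can reach 40 percent

print(f"Hardware/software/DR/networking: ${hardware_and_network:,.0f}")
print(f"Facilities, taxes, and labor:    ${facilities_and_labor:,.0f}")
print(f"Labor alone, at the high end:    ${labor_alone:,.0f}")
```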

The reality of the traditional data center is further complicated by the fact that most of the costs go toward maintaining existing (and sometimes aging) applications and infrastructure. Some estimates put maintenance at 80 percent of spending.

  • Most data centers run a lot of different applications and have a wide variety of workloads.

  • Many of the most important applications running in data centers are actually used by relatively few employees.

  • Some applications that run on older systems are taken off the market (no longer sold) but are still necessary for business.

Because of the nature of these applications, it probably wouldn’t be cost effective to move these environments to the cloud.

The cost of running a cloud data center

In this case, cloud data center means a data center with 10,000 or more servers on site, all devoted to running a small number of applications built with consistent infrastructure components (such as racks, hardware, OS, networking, and so on).

Cloud data centers are

  • Constructed for a different purpose.

  • Created at a different time than the traditional data center.

  • Built to a different scale.

  • Not constrained by the same limitations.

  • Designed to run different workloads than traditional data centers.

Because of this design approach, the economics of a cloud data center are significantly different.

Estimates of what it costs to run a cloud data center include three cost factors:

  • Labor costs are 6 percent of the total costs of operating the cloud data center.

  • Power distribution and cooling are 20 percent.

  • Computing costs are 48 percent.

Of course, the cloud data center has some different costs than the traditional data center (such as buying land and construction).
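
Setting those one-time costs aside, here is a minimal sketch of how the recurring labor share compares under the two models, using the 40 percent and 6 percent figures above (the shared $20 million budget is again a hypothetical, used only to make the comparison concrete):

```python
# Compare labor's share of annual costs in a traditional vs. a cloud
# data center, using the percentages cited in this post. The common
# $20M budget is hypothetical and used only for comparison.
annual_budget = 20_000_000

traditional_labor = annual_budget * 0.40  # up to 40% in a traditional data center
cloud_labor = annual_budget * 0.06        # about 6% in a cloud data center

print(f"Traditional labor: ${traditional_labor:,.0f}")
print(f"Cloud labor:       ${cloud_labor:,.0f}")
print(f"Annual difference: ${traditional_labor - cloud_labor:,.0f}")
```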


Tuesday, 28 July 2020

Migrating to cloud increases productivity and lowers IT costs for SMBs

Interviews with nine companies examined the impact on revenue as well as how long it took to recoup the investment in the cloud platform.

[Image: IDC. IDC interviewed nine small- and medium-sized companies using Google Cloud Platform to understand how the cloud provider influences business agility, efficiency, and productivity.]


Developers, analysts, and IT teams are more efficient when working on Google Cloud Platform, according to a new analysis by IDC. IDC analysts Shari Lava and Matthew Marden shared the findings in the report, "The Business Value of Improved Performance and Efficiency with Google Cloud Platform," which was sponsored by Google Cloud. The researchers interviewed nine small and midsize businesses to determine the impact of the cloud platform on IT costs, productivity, and business agility.

This increased productivity and efficiency helped the companies take advantage of new business opportunities and increase revenue as well. 

Other findings include:

  • 16% higher revenue per organization per year
  • 19% higher developer productivity
  • 26% lower IT infrastructure costs

This study included interviews with nine SMBs running most of their business workloads on Google Cloud Platform. On average, the companies have 87 employees and annual revenue of $10.5 million. The interviewed businesses have IT teams with 34 staff members on average, most of whom are focused on development efforts. Seven of the nine companies interviewed for the study shared their observations in the report, including:

  • Albo: Digital banking
  • Gesto: Artificial intelligence and gesture recognition
  • Grasshopper: Digital banking
  • idwall: Identity management
  • Logically: Artificial intelligence and information overload
  • SoundCommerce: E-commerce data analysis
  • WatchRX: Wearable for older adults 

The companies were based in Brazil, the US, Mexico, the United Kingdom, Australia, and Singapore, and covered financial services, software, IT services, digital health, insurance, and technology.

Here are the details of how the companies used Google Cloud to boost efficiency and agility.

Increasing efficiency on the dev team 

The IDC analysts found that businesses using Google Cloud Platform had these benefits in the development life cycle:

  • 21% faster deployments for new applications
  • 34% faster deployments for new features

The businesses in the report also were able to release more new features each year, going from an average of 86 features before using Google Cloud to 166 after switching to that platform.

The report authors said that these improvements reflect increased value for SMB development teams:
"For small and medium-sized businesses, it is essential that they maximize the value of these teams as they are closely linked to their ability to serve customers and their other employees, as well as compete against larger companies with more resources."

IDC found that the companies interviewed for the report had an increase in productivity for their development teams of 19%, which represents the equivalent of more than four additional development team members.

Several companies interviewed for the report said that using the Google Cloud Platform made it easier to implement DevOps approaches to software development.

T-Kiang Tan, the chief investment officer at Grasshopper, said in the report that Google Cloud has allowed developers and researchers to be more responsive to business needs and spend less time on infrastructure issues.

"For developers, having the infrastructure available to them makes it easier for them to experiment because they don't have to worry about capacity," Tan said. "They're around 20% more productive." 

Faster response to new customer demands

The interviewed companies are realizing higher revenue by better addressing business opportunities and delivering new applications and services faster. These companies are competing to gain a foothold or expand in crowded markets where they must be able to seamlessly deliver products and services to their customers.

Gabriel Prado, the chief technology officer at idwall, said in the report that Google Cloud allows his company to make decisions in real time because there is no lag in collecting information.

In addition to the benefits for tech teams, the researchers found that end users saw gains as well, with a 53% higher level of productivity for analytics teams.

"We used to spend a lot of time correcting data, so the data platform with Google has been a big improvement," Prado said.  

Lower infrastructure costs

The report found that IT infrastructure costs were 26% lower among the nine companies in the survey. Several study participants listed automated patching, the use of preconfigured virtual machines, and autoscaling with Kubernetes Engine as key tools in optimizing IT infrastructure costs.

Also, Roberto Gaziola Junior, chief technology officer at Gesto, said in the report that Google Cloud's serverless features are cheaper than the other cloud platforms the company considered.

"We run and then destroy virtual machines on a regular basis, so serverless saves us money," he said. 

ROI analysis 

Although many companies deal with stalled cloud projects and spend more than anticipated, IDC found that the companies in this report achieved a three-year ROI of 222% and broke even on their cloud investment in eight months.

IDC analysts used a three-step process to calculate the ROI and payback period: 

  1. Gathered quantitative benefit information during the interviews using a before-and-after assessment of the impact of using Google Cloud Platform.
  2. Conducted a three-year total cost analysis profile based on the interviews, which included additional costs related to migrations, planning, consulting, and staff or user training.
  3. Calculated the ROI and payback period via a discounted cash flow analysis of the benefits and investments for the organizations' use of Google Cloud Platform over a three-year period.
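
To make step 3 concrete, here is a minimal sketch of a discounted cash flow ROI and payback calculation. All of the inputs (the upfront investment, the annual benefits, and the 12% discount rate) are hypothetical illustrations, not IDC's figures:

```python
# Minimal sketch of a discounted cash flow ROI and payback calculation.
# Every figure below is a hypothetical illustration, not IDC data.
DISCOUNT_RATE = 0.12

investment = 150_000                           # migration, consulting, training
annual_benefits = [180_000, 210_000, 240_000]  # net benefits for years 1-3

# Discount each year's benefit back to present value.
discounted = [b / (1 + DISCOUNT_RATE) ** (year + 1)
              for year, b in enumerate(annual_benefits)]

roi = (sum(discounted) - investment) / investment
payback_months = 12 * investment / annual_benefits[0]  # rough, first-year run rate

print(f"Three-year ROI: {roi:.0%}")           # ~233% with these inputs
print(f"Approximate payback: {payback_months:.0f} months")
```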

Sunday, 5 July 2020

The Need for Security in Serverless Architectures

Serverless architectures (also referred to as “FaaS” - Function as a Service) enable organizations to build and deploy software and services without maintaining or provisioning any physical or virtual servers. Applications built using serverless architectures are suitable for a wide range of services, and can scale elastically as cloud workloads grow while also helping developers concentrate on writing business logic without having to manage the hosting and infrastructure.
A serverless architecture can be used to solve many different problems and use cases, for example, as the backend service for a web application. Indeed, serverless applications are becoming very popular. However, serverless has some unique security issues. In this article, we'll walk through the top serverless security risks, from injection attacks to improper exception handling, and discuss best practices to address them.
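
For context, a serverless function is typically just a handler that the platform invokes in response to an event; the provider supplies and scales the runtime. Here is a minimal sketch of an AWS Lambda-style handler in Python (the event shape is an illustrative assumption):

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler. The platform provisions and
    scales the runtime; the developer writes only this business logic."""
    name = event.get("name", "world")  # the event shape here is illustrative
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```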


The shared security responsibility model, adapted to serverless architectures, divides the work as follows: the serverless provider is responsible for securing the data center, network, servers, operating systems, and their configurations. Application logic, code, data, and application-layer configurations, however, still need to be robust and resilient to attacks, and those remain the responsibility of the application owner.


The comfort and elegance of serverless architectures are not without their drawbacks.
Below is the Serverless Architectures Security Top 10 list, where SAS-1 indicates the most critical risk and SAS-10 the least critical:

SAS-1: Function Event Data Injection
Serverless functions can consume input from many types of event sources, and each event type can carry a different message format, depending on the event and its source. In the context of serverless architectures, function event-data injection is not strictly limited to direct user input, such as input from a web API call. Any part of these event messages can contain attacker-controlled or otherwise dangerous input.
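
The standard mitigation is to treat every field of the event as untrusted and validate it against a strict whitelist before use. Here is a minimal sketch for an API-triggered function (the expected body format and the order-ID pattern are illustrative assumptions):

```python
import json
import re

# Accept only the fields we expect, in the format we expect.
ORDER_ID_RE = re.compile(r"^[A-Z0-9]{8,16}$")  # illustrative format

def handler(event, context):
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "invalid JSON"}

    order_id = body.get("order_id", "")
    if not ORDER_ID_RE.fullmatch(order_id):
        # Reject anything outside the whitelist pattern rather than
        # trying to sanitize attacker-controlled input in place.
        return {"statusCode": 400, "body": "invalid order_id"}

    # order_id is now safe to pass to downstream queries and services.
    return {"statusCode": 200, "body": json.dumps({"order_id": order_id})}
```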

SAS-2: Broken Authentication
A weak authentication implementation might enable an attacker to bypass application logic and manipulate its flow, potentially executing functions and performing actions that were not supposed to be exposed to unauthenticated users.
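
One hedge is to verify a signed token before executing any logic. Here is a minimal sketch using the PyJWT library (the header location, the shared secret, and the claims are illustrative; a real deployment would validate against the identity provider's keys):

```python
import jwt  # PyJWT

def handler(event, context):
    # Header location and secret handling are illustrative assumptions.
    auth = (event.get("headers") or {}).get("Authorization", "")
    token = auth.removeprefix("Bearer ")
    try:
        # Pin the algorithm explicitly; never accept "none".
        claims = jwt.decode(token, "use-a-managed-secret", algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return {"statusCode": 401, "body": "unauthorized"}
    return {"statusCode": 200, "body": f"hello, {claims.get('sub')}"}
```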

SAS-3: Insecure Serverless Deployment Configuration
One extremely common weakness affecting applications that use cloud-based storage is incorrectly configured cloud storage authentication and/or authorization. Since one of the recommended best practices for serverless architectures is to make functions stateless, many serverless applications rely on cloud storage infrastructure to persist data between executions. With insecure cloud storage configurations, sensitive corporate information can be exposed to unauthorized users.
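
On AWS, for example, public access to a bucket can be blocked programmatically as part of deployment. Here is a minimal boto3 sketch (the bucket name is an illustrative placeholder):

```python
import boto3

s3 = boto3.client("s3")

# Block every form of public access on the bucket that stores the
# application's state. The bucket name is illustrative.
s3.put_public_access_block(
    Bucket="my-app-state-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```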

SAS-4: Over-Privileged Function Permissions & Roles
Serverless applications should always follow the principle of least privilege: a serverless function should be given only those privileges that are essential to perform its intended logic. In a system where all functions share the same set of over-privileged permissions, a vulnerability in a single function can escalate into a system-wide security catastrophe.
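
In AWS terms, this means scoping each function's IAM policy to the specific actions and resources it needs. Here is a minimal sketch of such a policy document, expressed as a Python dict (the table ARN and account ID are illustrative placeholders):

```python
import json

# Least-privilege policy for a function that only reads one DynamoDB
# table: no wildcards, no write actions, one named resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    }],
}
print(json.dumps(policy, indent=2))
```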

SAS-5: Inadequate Function Monitoring and Logging
While many serverless vendors provide extremely capable logging facilities, these logs, in their basic out-of-the-box configuration, are not always suitable for providing a full security event audit trail. To achieve adequate real-time security event monitoring with a proper audit trail, serverless developers and their DevOps teams must stitch together logging logic that fits their organizational needs.
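
Here is a minimal sketch of one such convention: emit one structured JSON line per security-relevant event so the platform's log service can filter and alert on consistent fields (the field names and checks are an illustrative convention, not a standard):

```python
import json
import logging
import time

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def audit(event_type: str, **fields):
    """Emit one structured JSON line per security-relevant event."""
    logger.info(json.dumps({"ts": time.time(), "event": event_type, **fields}))

def handler(event, context):
    audit("invocation", source=event.get("source", "unknown"))
    if "token" not in event:
        audit("auth_failure", reason="missing token")
        return {"statusCode": 401, "body": "unauthorized"}
    return {"statusCode": 200, "body": "ok"}
```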

SAS-6: Insecure 3rd Party Dependencies
Oftentimes, to perform a single discrete task, a serverless function will depend on third-party software packages and open source libraries, or consume third-party remote web services through API calls. Importing code from a vulnerable third-party dependency can make the function itself vulnerable.

SAS-7: Insecure Application Secrets Storage
As applications grow in size and complexity, there is a need to store and maintain "application secrets". Storing these secrets in plain text as environment variables is a common mistake: environment variables can leak and reach the wrong hands.
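
A safer pattern is to fetch secrets at runtime from a managed secrets store and hold them only in memory. Here is a minimal boto3 sketch using AWS Secrets Manager (the secret name is an illustrative placeholder):

```python
import boto3

secrets = boto3.client("secretsmanager")

def get_db_password() -> str:
    # Fetched at runtime and kept only in memory, instead of sitting
    # in a plain-text environment variable. Secret name is illustrative.
    resp = secrets.get_secret_value(SecretId="prod/app/db-password")
    return resp["SecretString"]
```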

SAS-8: Denial of Service & Financial Resource Exhaustion
While serverless architectures bring a promise of automated scalability and high availability, they do impose some limitations and issues that require attention. The frequency and volume of Denial of Service (DoS) attacks have increased dramatically, and such attacks have become one of the primary risks facing almost every company exposed to the Internet. In a serverless environment, a flood of events can also exhaust concurrency limits or run up the bill.
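
One practical guard on AWS is to cap a function's concurrency so that a flood of events cannot scale costs without bound. Here is a minimal boto3 sketch (the function name and limit are illustrative):

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap concurrent executions so a burst of malicious or buggy events
# cannot exhaust account capacity or inflate costs. Values are illustrative.
lambda_client.put_function_concurrency(
    FunctionName="process-orders",
    ReservedConcurrentExecutions=50,
)
```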

SAS-9: Serverless Function Execution Flow Manipulation
Manipulation of application flow may help attackers to subvert application logic. Using this technique, an attacker may sometimes bypass access controls, elevate user privileges or even mount a Denial of Service attack.

SAS-10: Improper Exception Handling and Verbose Error Messages
Some developers adopt verbose error messages, enable debugging environment variables, and then forget to clean up the code when moving it to production. Verbose error messages, such as stack traces or syntax errors, that are exposed to end users may reveal details about the internal logic of the serverless function, and in turn reveal potential weaknesses and flaws, or even leak sensitive data.
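
Here is a minimal sketch of the safer pattern: log full details internally, keyed by an error ID, and return only a generic message to the caller (the placeholder business logic is illustrative):

```python
import json
import logging
import uuid

logger = logging.getLogger()
logger.setLevel(logging.ERROR)

def do_business_logic(event):
    # Illustrative placeholder for the function's real work.
    return {"ok": True}

def handler(event, context):
    try:
        return {"statusCode": 200, "body": json.dumps(do_business_logic(event))}
    except Exception:
        error_id = str(uuid.uuid4())
        # The full stack trace goes to internal logs only, keyed by an
        # ID that can be shared with support without leaking internals.
        logger.exception("unhandled error %s", error_id)
        return {
            "statusCode": 500,
            "body": json.dumps({"error": "internal error", "id": error_id}),
        }
```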


For all Internet-facing applications, robust security is essential.

As many organizations are still exploring serverless architectures, or just taking their first steps in the serverless world, we believe that securing them is critical to their success in building robust, secure, and reliable applications. We believe it is essential to scrub all HTTP/S traffic using a unified cloud-based platform that includes a next-generation WAF, DDoS protection, advanced bot management and mitigation, API security, and much more.
However, serverless architectures raise an additional question: what about events that can trigger functions directly? For example, say a Lambda function is written to process files that are uploaded to an S3 bucket. In this case, the WAF provides no protection, and malicious user activity could result in a security compromise, because the event data never passes through the WAF for inspection.
In cases like these, a small amount of effort can provide large security dividends. Although it is convenient to let users trigger functions directly through backend services, a more robust approach is to structure functions so that user inputs always pass through the Web Application Firewall (WAF), which can examine the data behind every event and defeat injection attempts. A modest effort creates a large payoff: a more robust security profile.
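
Where an event source cannot be routed through a WAF at all, as in the S3 example above, the function itself has to validate the event before acting on it. Here is a minimal sketch for an S3-triggered function (the file-type whitelist, the size cap, and the processing step are illustrative assumptions):

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")

ALLOWED_SUFFIXES = (".csv", ".json")  # illustrative whitelist
MAX_SIZE_BYTES = 5 * 1024 * 1024      # illustrative 5 MB cap

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Validate the untrusted parts of the event before touching data.
        if not key.endswith(ALLOWED_SUFFIXES):
            continue  # skip unexpected file types
        if s3.head_object(Bucket=bucket, Key=key)["ContentLength"] > MAX_SIZE_BYTES:
            continue  # skip suspiciously large uploads

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        process(body)

def process(data: bytes):
    """Illustrative placeholder for the application's real logic."""
```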


