
This site is automatic translation of http://www.marcelosincic.com.br, original in portuguese

Integrating SCOM with System Center Advisor

System Center Advisor is a very good tool for monitoring environments, because it has dynamic rules and is totally free. For those who don't know it yet or want more details, follow the links below:

How to integrate System Center Advisor with Operations Manager?

SCOM allows you to monitor your entire environment, from video camera equipment (via SNMP) to mainframes (via management packs), but its rules are based on preselected behaviors and are often reactive.

On the other hand, Advisor is based on Best Practices, with a little more than 350 intuitive, preventive rules.

Combining the capabilities of the two products is desirable and simple to do. The first step is, from the SCOM console, to open "Administration -> System Center Advisor -> Advisor Connector" and enter the credentials of your Advisor account:

(screenshot)

After you log in, SCOM will ask you to select your Advisor account, since it is possible to have more than one:

(screenshot)

The next step is to select the servers that will be monitored by the Advisor in conjunction with SCOM:

(screenshot)

After the integration, you will be able to see in the Advisor console, on the Servers tab, that SCOM is integrated:

(screenshot)

How to view the data of Advisor in SCOM?

Viewing the data is automatic, because Advisor creates views and state monitors in SCOM, with two main views.

The first view shows the state of the integrated agents:

(screenshot)

And the second view lets you see the alerts generated by Advisor and imported by SCOM:

(screenshot)

Finally, combining the two products is simple and functional, without generating costs.

If you already have SCOM, integrate System Center Advisor and take advantage of the joint features of these two products.

New free Ebook: Microsoft System Center: Network Virtualization and Cloud Computing

This eBook explains in detail how to implement SDN (Software-Defined Networking), one of the technologies expected to grow significantly over the next few years.

The book is very good. From what I could see, it is divided into a conceptual part, several practical examples (2 VMs with the same range, 2 VMs with the same IP, etc.) and a section teaching, step by step, how to configure the hosts and VMM.

(ebook cover)

Using Fixed IP in Virtual Machines on Windows Azure

A new feature that became available in the new versions of PowerShell for Windows Azure is the set of "StaticVNetIP" commands. You can download the new version at http://www.windowsazure.com/pt-br/downloads/online #cmd-tools

These commands allow you to pin an IP within the range of the virtual network you have already defined, thus ensuring the IP of each VM without having to "Start" them in a fixed order every time.

Step 1: Know the Risks and Manage Your IPs

Before we begin, it is important to note that there is no support if problems arise (http://msdn.microsoft.com/en-us/library/windowsazure/jj156090.aspx#BKMK_IPAddressDNS):

"Use DHCP-leased addresses (this is mandatory — static addresses are NOT supported)"

Therefore, before you start assigning fixed IPs to your VMs, remember to keep a list of the IPs you have defined!

In addition, do not use IPs that are outside the range of your virtual network. For example, my network has the range 10.0.1.4 to 10.0.1.254, and if I fix the IP 10.0.2.4 on a VM, it will become unreachable and will have to be deleted.

(screenshot)

 

Step 2: Register the Subscription in PowerShell

This step is persistent: simply run the Add-AzureAccount command, which opens an authentication window and imports your subscription data:

(screenshot)

To verify that the import succeeded, use the Get-AzureSubscription command, which returns the registered data:

(screenshot)

If you need to remove a subscription that you used in the past for tests, the Remove-AzureSubscription command is the one indicated. If necessary, you will need to reset your default subscription; the command below resets the default:

(screenshot)
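Put together, the subscription steps above can be sketched as the following PowerShell sequence (using the classic Azure Service Management module; the subscription name "MySubscription" is a hypothetical example):

```powershell
# Sign in and import the subscription data (opens an authentication window)
Add-AzureAccount

# List the registered subscriptions to verify the import succeeded
Get-AzureSubscription

# Remove a subscription used in the past for tests
# ("MySubscription" is a hypothetical name)
Remove-AzureSubscription -SubscriptionName "MySubscription"

# Reset the default subscription if necessary
Select-AzureSubscription -SubscriptionName "MySubscription" -Default
```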

 

Step 3: Register the IP of Each VM

To register the IPs, remember what was mentioned at the beginning: they must be within the range of the virtual network you defined, otherwise the VM can no longer be accessed and will be unreachable.

The command used to pin the IP does not work with strings; the first step is to use the Get-AzureVM command to return, into a variable, the persistent object of the VM you want:

(screenshot)

The command above searches for the VM "W2012-Exch-3" in the catalog and returns it, and the Set-AzureStaticVNetIP command below pins the IP:

(screenshot)

Note: you can use the pipe ("|") to run the commands on a single line, if desired.

However, note that the command above alone does not confirm the change. The correct way is to run Update-AzureVM in sequence to confirm the change, like a commit.

Thus, the sequence of commands to change the VMs would look like the example below:

(screenshot)

Note that in this example three different VMs had their IPs fixed, and with the Get-AzureStaticVNetIP command it is possible to check whether a VM has the desired IP set:

(screenshot)
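The whole sequence described above can be sketched as a single pipeline (the service name "MyService" and the IP 10.0.1.10 are hypothetical examples for this environment):

```powershell
# Get the VM object, pin the IP and commit the change with Update-AzureVM
# (Set-AzureStaticVNetIP does not work with plain strings, only VM objects)
Get-AzureVM -ServiceName "MyService" -Name "W2012-Exch-3" |
    Set-AzureStaticVNetIP -IPAddress "10.0.1.10" |
    Update-AzureVM

# Check that the VM received the desired IP
Get-AzureVM -ServiceName "MyService" -Name "W2012-Exch-3" |
    Get-AzureStaticVNetIP
```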

Finally, when checking the network scope in Azure, you can see that the machines' fixed IPs are now reserved:

(screenshot: list of IPs)

Online Training About SDN (Software Defined Network) with Windows and System Center

This event is the answer to an important question: what is SDN?

Enjoy!!!!

 

Software-Defined Networking with Windows Server and System Center Jump Start

Free online event with live Q&A with the networking team: http://aka.ms/SftDnet

Wednesday, March 19th from 8am – 1pm PST

Are you exploring new networking strategies for your datacenter? Want to simplify the process? Software-defined networking (SDN) can streamline datacenter implementation through self-service provisioning, take the complexity out of network management, and help increase security with fully isolated environments. Intrigued? Bring specific questions, and get answers from the team who built this popular solution!
Windows Server 2012 R2 and System Center 2012 R2 are being used with SDN implementations in some of the largest datacenters in the world, and this Jump Start can help you apply lessons learned from those networks to your own environment. From overall best practices to deep technical guidance, this demo-rich session gives you what you need to get started, plus in-depth Q&A with top experts who have real-world SDN experience. Don’t miss it!

Register here: http://aka.ms/SftDnet

Using Hyper-V Replica Part II – Best Practices for RTO and RPO

In the first post about Hyper-V Replica we discussed its advantages versus storage replication and how to start configuring the replica: http://msincic.wordpress.com/2014/01/18/using-hyper-v-replica-part-1-advantages-and-first-replica/

In this second post we will discuss why RTO and RPO are important and how Hyper-V Replica fits these concepts.

Recovery Time Objective and Recovery Point Objective

Basically, RTO and RPO indicate the objectives that a disaster-recovery solution must meet:

  • RTO – maximum time to bring the service back into production
  • RPO – maximum amount of data that can be "lost" between the disaster event and the restored environment

A good example of how these values relate to each other and what they mean can be explained in the chart below:

(chart)

In the example above we can clearly "see" the RTO and RPO:

  • RTO was 5 hours and 3 minutes, between 05:15 and 10:18
  • RPO was 3 hours and 15 minutes, between 02:00 and 05:15, since the backup was performed at 2 a.m.

How to determine the RTO and RPO

These values are determined by a plan called the DRP (Disaster Recovery Plan), which is orchestrated by consultancies specialized in this type of process. It is usually done when an organization is upgrading its datacenter and reviewing its data recovery policies, or when building a redundant datacenter.

The survey process is based on interviews and data from the IT environment and, among other things, collects:

(screenshot)

Why Hyper-V Replica is a great option

The backup process is one of the ways to meet the RPO and RTO, but normal restore practices often fall short once you take into account the time lost between the last backup and the failure (RPO) and the time required to restore a server from backups (RTO).

With Hyper-V Replica the RTO is minimal, since the replicas keep the virtual machine (VM) intact in the redundant environment.

And the RPO?

In a backup environment the RPO is easily calculated and maintained. For example, if the RPO of the CRM application allows a maximum calculated loss of 30 minutes, you can make an incremental backup every 15 or 30 minutes.

In the case of Hyper-V Replica this time is not determined in a simple way, since the replication interval of each VM defines the frequency, not the desired protection window. It would be nice to have an option to indicate the maximum time by which a replica may be out of date.

A second important item to take into account is the grouping of an application, for example when more than one server forms the same application and their replicas need to stay synchronized. Since Hyper-V Replica does not have the concept of a service group, we have no way to ensure the integrity of the application as a whole.

Another difficulty in Hyper-V Replica is the small number of replication interval options (Windows 2012: every 5 minutes; Windows 2012 R2: every 30 seconds, 5 minutes or 15 minutes):

(screenshot)

Imagine a cluster with 80 VMs, each with a different business impact or technical requirements. Of these 80, some VMs are web servers that could be replicated once a day, others are application servers that only need to be replicated when they receive some kind of update, and finally there are servers that need to be replicated continuously.

How to configure different RPOs?

A practice that can be adopted in a simple way is to put the machines into criticality groups and configure them using the three Windows 2012 R2 replication intervals (30 seconds, 5 minutes and 15 minutes).
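A sketch of this grouping, assuming the Windows 2012 R2 Set-VMReplication cmdlet on VMs that already have replication enabled; the VM names are hypothetical examples:

```powershell
# Criticality groups (hypothetical VM names)
$critical = "SQL-01", "SQL-02"   # replicate every 30 seconds
$normal   = "APP-01", "APP-02"   # replicate every 5 minutes
$low      = "WEB-01", "WEB-02"   # replicate every 15 minutes

# Apply the three Windows 2012 R2 intervals (values are in seconds)
$critical | ForEach-Object { Set-VMReplication -VMName $_ -ReplicationFrequencySec 30 }
$normal   | ForEach-Object { Set-VMReplication -VMName $_ -ReplicationFrequencySec 300 }
$low      | ForEach-Object { Set-VMReplication -VMName $_ -ReplicationFrequencySec 900 }
```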

The problem is that if the VM replicated every 30 seconds is, for example, a database, and the environment uses a WAN link for redundancy, link consumption will be very high; meanwhile the other VMs will reach their replication intervals and all replicas will occur simultaneously. As a result, the RPO will be impaired for all critical VMs and unnecessarily tight for the non-critical machines.

A good practice in this case is to configure the VMs with an RPO greater than 2:00 to be replicated manually through the PowerShell command below:

Resume-VMReplication MaquinaVirtual -Resynchronize -ResynchronizeStartTime "1/8/2012 05:00 AM"

This command can be executed by the Task Scheduler or by Orchestrator, with a schedule embedding the command.
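A minimal sketch of such a scheduled task, using the Windows 2012 ScheduledTasks cmdlets; the task name and schedule are hypothetical examples:

```powershell
# Run the resynchronization daily at 5 AM via the Task Scheduler
# (task name and time are examples)
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument '-Command "Resume-VMReplication MaquinaVirtual -Resynchronize"'
$trigger = New-ScheduledTaskTrigger -Daily -At 5am
Register-ScheduledTask -TaskName "Replica-MaquinaVirtual" `
    -Action $action -Trigger $trigger
```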

In the example above, the VMs with database-like information, such as a File Server, would keep Hyper-V's own configuration of every 5 or 15 minutes. The static VMs could be configured with manual replication and scheduled tasks, or a recurring runbook, replicating on schedule according to their criticality group.

Conclusion

This second post discussed how to plan for and meet the RTO and RPO.

The next post will address the PowerShell commands and the sequence in which they can be run as a script or with a Runbook in Orchestrator.

Using Hyper-V Replica Part I – Advantages and First Replica

Part II of Hyper-V Replica, about RTO and RPO planning, is available at http://msincic.wordpress.com/2014/01/18/using-hyper-v-replica-part-ii-best-practices-for-rto-and-rpo/

Although widely reported as a new feature of Windows Server 2012, Hyper-V Replica is not being used by professionals as much as expected. Most likely the causes are unfamiliarity and resistance to a new technology, which is only natural.

However, one of the ways used today to replicate VMs in Hyper-V, and one that creates several problems, is storage replication, i.e. the replication that occurs between arrays in redundant datacenter (DR) scenarios.

The table below lists some reasons why Hyper-V Replica is a better option than storage replication:

 

| | Storage replication | Hyper-V Replica |
| --- | --- | --- |
| Replica performance | Better performance, using a dedicated algorithm | Good performance, using delta replication of VHDX changes only |
| Consistency | Ensures data consistency up to the replica | Replicates NTFS-based data, ensuring replication of complete, already-committed data blocks |
| Allows | Active/Active | Automatic with Live Migration |
| RPO | Allows scheduled or even continuous replication | Allows scheduling the initial replica; delta every five minutes |
| RTO | Needs the replica online and hosts need to have disk services restarted | Immediate, without manual intervention; just a stop/start action on the VM |
| Replicating new VMs | Need to create them manually in the DR site and change the XML configuration | Replica is created automatically when you select replication in the active cluster |
| Admin tools | Storage console | Cluster/Hyper-V console |
| Specialization level | Advanced storage concepts | Standard Hyper-V and Microsoft Cluster |
| Canceling a replica | Need to cancel and delete the replica data | Disable replica on the VM (a button) |
| Inverting the replica source | Restart the replica | Allows inverting the direction |
| Cluster mode | Active/Passive | Active/Active |
| Recovery action | Recreate the replica algorithm | Invert the direction |

The biggest problem with storage replication for Hyper-V is that the replicated LUN in the DR site is offline. Therefore, you cannot change, or even see in Hyper-V, the VMs in the DR site, since the LUN is not accessible and only comes online at the moment of a failover operation.

Hyper-V Replica, on the other hand, swaps the VMs without any additional step, including the reversal (reverse: secondary becomes primary). However, we will talk about that in another post. Let's focus on the moment of the first replica.

There are two ways to perform the first replica without using the link between the sites, as in the following example:

(diagram)

The first way is to do the Hyper-V Replica configuration locally and wait for the secondary site to have all the VMs ready.

This method has the disadvantage of mounting the storage and servers in two stages, which can make the service more expensive, and in many cases there is not enough space or power available.

The other way is to use the Hyper-V Replica wizard, choosing to export the VM.

To do this, when configuring the replica of a VM, choose "Send initial copy using external media" and set a location to export the files, as below:

(screenshot)

The next step is to import the VM on the destination host. Note that the VM is created at the end of the wizard above on the destination host, but without the files and without the replica activated:

(screenshot)

Choose the location created by the wizard and wait for the import:

(screenshot)

Once this item is complete, the status on the destination server will be Warning and on the source server Normal, indicating that everything is ok.

(screenshot)

The next step is to right-click the VM on the origin server and use the Resume Replication option so that the synchronization copy starts.

An important tip: Hyper-V Replica works by creating a snapshot and sending the snapshot file from source to destination, so don't take too long to perform the initial synchronization, because you may run into space and performance problems due to the use of the snapshot's differencing disk.
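For reference, the same external-media flow can be sketched with the Hyper-V PowerShell cmdlets; the server names, VM name and paths below are hypothetical examples:

```powershell
# On the source host: enable the replica ("hv-dr01" is a hypothetical
# replica server) and export the initial copy to external media
Enable-VMReplication -VMName "W2012-Exch-3" -ReplicaServerName "hv-dr01" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "W2012-Exch-3" -DestinationPath "E:\InitialCopy"

# On the destination host: import the initial copy from the external media
Import-VMInitialReplication -VMName "W2012-Exch-3" -Path "E:\InitialCopy"
```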

In the next posts we will address the best settings and how to assemble a Hyper-V Replica environment.

Microsoft Assessment and Toolkit 9.0 (MAP) Released

Yesterday MAP 9 was released for download.

The MAP is an essential tool for assessing Windows client migration, Windows Server, Windows Azure, database consolidation, server consolidation, virtualization, licensing and workload.

A description of the new features follows:

New Server and Cloud Enrollment scenario helps to simplify adoption

Server and Cloud Enrollment (SCE) is a new offering under the Microsoft Enterprise Agreement that enables subscribers to standardize broadly on one or more Microsoft Server and Cloud technologies. The MAP Toolkit 9.0 features an assessment scenario to identify and inventory SCE supported products within an enterprise and help streamline enrollment.

New Remote Desktop Services Licensing Usage Tracking scenario creates a single view for enterprise wide licensing

With an increase in enterprises deploying Remote Desktop Services (RDS) across wider channels, RDS license management has become a focus point for organizations. With the new RDS Licensing scenario, the MAP Toolkit rolls up license information enterprise-wide into a single report, providing a simple alternative for assessing your RDS licensing position.

Support for software inventory via Software ID tags now available

As part of the Microsoft effort to support ISO 19770-2, the MAP Toolkit now supports inventory of Microsoft products by Software ID (SWID) tag. SWID enhanced reports will provide greater accuracy and assist large, complex environments to better manage their software compliance efforts by simplifying the software identification process and lowering the cost of managing software assets.

Improved Usage Tracking data collection for SQL Server Usage Tracking scenarios

As part of our ongoing improvement initiatives, Usage Tracking for SQL Server 2012 has been enhanced to use User Access Logging (UAL). UAL is a standard protocol in Windows Server 2012 that collects User Access information in near real time and stores the information in a local database, eliminating the need for log parsing to perform Usage Tracking assessments. UAL vastly improves the speed and helps to eliminate long lead times for environment preparation associated with running Usage Tracking assessments.

Download the MAP Toolkit 9.0 now!
