This site is an automatic translation of http://www.marcelosincic.com.br; the original is in Portuguese.

System Center 2019 and Windows Server 2019 – Upgrade in place II

With the official launch of System Center 2019 last week, we can now test migration to the final version.

https://cloudblogs.microsoft.com/windowsserver/2019/03/07/coming-soon-microsoft-system-center-2019/

New Version Policy

Under the new System Center versioning policy, there will be no Semi-Annual Channel like the one for Windows.

That is, you will keep the 2019 version for approximately 3 years, with the updates that usually occur about 3 times a year.

This means that, unlike the first releases, which were 1801 and 1807, from now on we will no longer have that type of nomenclature; the product returns to the old model of versions plus update rollups (e.g., 2019 UR 99).

Important: System Center Configuration Manager continues on the Semi-Annual Channel.

https://docs.microsoft.com/en-us/system-center/ltsc-and-sac-overview

Running the Upgrade

The same document above shows that in-place upgrades are supported only across the last two versions.

This means that users of the 2012 R2 versions will need to upgrade first to 1801 and then to SC 2019.

Important: System Center Configuration Manager has different update rules depending on the chosen channel.

Just as the upgrade from 2016 to 1801 was uneventful, as I demonstrated at https://msincic.wordpress.com/2019/01/02/system-center-2019-and-windows-server-2019-upgrade-in-place/, the migration to 2019 was also quite satisfactory.

All of the products only require confirming the installation, with the exception of SCOM and VMM, for which you also need to upgrade the agents.

I did not upgrade DPM because I currently use Microsoft Azure Backup, which is a specialized subset for backup to Azure.

System Center Operations Manager (SCOM)

SCOM (3)

SCOM (2)

In the case of SCOM, the license can now be changed through the interface under "About"; previously it had to be done in PowerShell with the Set-SCOMLicense command.

SCOM (4)

Remember that with SCOM it is necessary to approve the agent upgrade on all servers shortly after the installation. If you do not, communication will continue, but constant warning alerts will be generated and new features can cause agents to fail.
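If helpful, the approvals can be done in bulk from the Operations Manager Shell. This is a minimal sketch, assuming the OperationsManager module is installed; the management server name is hypothetical:

    # Connect to the management group (server name is an assumption)
    Import-Module OperationsManager
    New-SCOMManagementGroupConnection -ComputerName "scom01.contoso.local"

    # List agents waiting for approval (including upgrades), then approve them
    Get-SCOMPendingManagement
    Get-SCOMPendingManagement | Approve-SCOMPendingManagement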

System Center Service Manager (SCSM) and System Center Orchestrator (SCO)

For Service Manager literally nothing needed to be done or changed, and the same was true of Orchestrator.

Service Manager (1)

Service Manager (2)

System Center Virtual Machine Manager (SCVMM or VMM)

VMM required a bit more work, because it is necessary to review the "Run As" accounts, since local accounts are now restricted, and to reinstall the agents.

In my case, I deliberately uninstalled it to validate whether reinstalling on top of the existing database would bring everything back, and it worked!
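For the agent step, here is a minimal sketch from the VMM PowerShell module, assuming hypothetical server and Run As account names:

    # Connect to the upgraded VMM server (name is an assumption)
    Import-Module VirtualMachineManager
    Get-SCVMMServer -ComputerName "vmm01.contoso.local" | Out-Null

    # Update the agent on managed computers using a reviewed Run As account
    $cred = Get-SCRunAsAccount -Name "HostManagement"   # hypothetical account
    Get-SCVMMManagedComputer | ForEach-Object {
        Update-SCVMMManagedComputer -VMMManagedComputer $_ -Credential $cred
    }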

VMM (1)

VMM (2)

VMM (3)

VMM (4)


Azure Virtual Datacenter (VDC) Part II – Basic Concepts

In the previous post we talked about migration to the cloud:

https://msincic.wordpress.com/2019/03/06/azure-virtual-datacenter-vdc-part-i-migration-as-is-and-to-be/

In this post we will understand the basic concepts, which are represented by this diagram:

image

Each part represents one of the pillars that support a Virtual Datacenter:

  • Encryption – All data traveling within a datacenter shared by multiple clients must be protected so that one client cannot access another's data. This involves encryption of communication, disks, and traffic.
  • Identity – A consistent identity model where customers can log in and see their own objects and all available resources. In Azure this is provided by multi-tenant Active Directory. As is well known in the market, directory systems allow multiple companies to be hosted, sharing the database model and authentication with total isolation.
  • Software-Defined Networks – How do you host multiple clients if everyone wants the same IP range and communicates over the same sets of cables?
    This is the challenge of SDNs: to allow isolated traffic. In the past we did this with VLANs, but the VLAN ID space is limited to 4,096 networks. Today this is done logically using features such as NVGRE and VXLAN, where network packets are tagged with the tenant they belong to, similar to what VLANs did but with a 24-bit ID space (about 16 million networks).
    This allows multiple clients to have the same IP range, such as 10.0.0.0/24, since each virtual network receives a different tag in its packets, with encryption and identity guaranteeing reliable delivery of the data packets (see the sketch after this list).
  • Compliance – It would not help if migrating to a public datacenter locked you into patterns that only work there. Public clouds need to adopt market standards so that networks can communicate. This does not mean that the way Microsoft's Machine Learning is coded is the same as AWS Machine Learning, but rather that the basic layers follow interoperability standards.
    For example, a VM in AWS can communicate over IP with a VM in Azure or Google Cloud because they all use the same protocols, even if each provider offers different aggregate services.
    The same goes for an application in Moodle or SAP: whether it runs in Azure or AWS does not matter, because both follow identical network and communication standards (interchange).
    Because of this compliance I can leave half of my servers on-premises and the others spread across 3 different public datacenters, all communicating normally.
  • Logging, Audit and Report – When migrating from a private (local) cloud to a public one, I need to know the costs and make sure my data is safe and accessible only by my users.
    Here we are not dealing with logs, audits and reports for the client, but rather with the internal infrastructure, so that the provider can be sure there is no data leakage, know who performed each operation, and report on it when necessary.
    That is why the control structures of public cloud providers are gigantic: they need to control everything and be able to recover from any kind of failure that occurs.
    The first datacenters came from the concept of hosting; that is, you took the servers from your own rack to the provider, where electricity, links and physical security became their responsibility. In that model, all responsibility for communication, logical security and reporting remained yours.
    In the public model a good part of the resources is allocated to controlling resources; for example, when deploying the old Microsoft Azure Stack (currently discontinued), several VMs were created just to provide the control components.
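To make the SDN pillar concrete, here is a minimal sketch with the Az PowerShell module, assuming an existing subscription (all names are hypothetical): two virtual networks receive exactly the same address space and still remain completely isolated.

    # Two resource groups representing two hypothetical tenants
    Connect-AzAccount
    New-AzResourceGroup -Name "rg-tenant-a" -Location "eastus"
    New-AzResourceGroup -Name "rg-tenant-b" -Location "eastus"

    # Both VNets use 10.0.0.0/24; the SDN layer tags each tenant's packets,
    # so the overlapping ranges never conflict
    New-AzVirtualNetwork -Name "vnet-a" -ResourceGroupName "rg-tenant-a" -Location "eastus" -AddressPrefix "10.0.0.0/24"
    New-AzVirtualNetwork -Name "vnet-b" -ResourceGroupName "rg-tenant-b" -Location "eastus" -AddressPrefix "10.0.0.0/24"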
Conclusion

In this second post we covered the basic components that make up a public cloud.

Feel secure when placing your data with these providers; they are prepared to ensure its privacy and security.

Azure Virtual Datacenter (VDC) Part I – Migration AS IS and TO BE

When we work on a public cloud migration project and the design is geared towards Azure, "AS IS" scenarios are very common.

AS IS

For those new to the term, "AS IS" means exactly that: copying the VMs from one environment to the other without any change, using Azure as a hypervisor.

In general, AS IS migration models are not efficient, because they consume a lot of IaaS resources (VMs), which are expensive, and do not take advantage of cheaper services (SaaS or PaaS). The advantage, however, is that the migration is faster and requires no changes.

TO BE (or Lift and Shift)

Good migrations are the "TO BE" kind, in the sense of what the environment will become: a transformation. The TO BE migration model is premised on using services, not just migrating VMs.

TO BE migrations are laborious and time-consuming, since the mapping involves understanding what is inside the VMs.

The execution cost is much lower because SaaS and PaaS have large financial advantages when compared to the IaaS model.

For example, in AS IS an IIS server and a SQL Server would simply have their virtual disks copied and booted. In the TO BE model, we would isolate each application that IIS hosts, creating a Web (App Service) plan for isolation and a web app for each site, and for SQL Server we would use the managed database service (SaaS or PaaS).
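As an illustration of the TO BE model, here is a minimal sketch with the Az PowerShell module; all resource names are hypothetical, and moving the site content and data is of course the larger part of the real work:

    # One App Service plan isolates the sites that shared the old IIS VM
    New-AzResourceGroup -Name "rg-tobe" -Location "eastus"
    New-AzAppServicePlan -Name "plan-web" -ResourceGroupName "rg-tobe" -Location "eastus" -Tier "Standard"
    New-AzWebApp -Name "contoso-site1" -ResourceGroupName "rg-tobe" -Location "eastus" -AppServicePlan "plan-web"

    # The SQL Server VM becomes a managed Azure SQL database
    New-AzSqlServer -ServerName "contoso-sqlsrv" -ResourceGroupName "rg-tobe" -Location "eastus" -SqlAdministratorCredentials (Get-Credential)
    New-AzSqlDatabase -DatabaseName "appdb" -ServerName "contoso-sqlsrv" -ResourceGroupName "rg-tobe"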

Using Service MAP

The first step in a migration is to map what each VM or physical server does in the environment.

For this we use the Service MAP: https://msincic.wordpress.com/2018/07/03/azure-log-insights-service-map/

It makes it possible to see the interconnections and the services each server uses across the environment, and to map which services we have to replace.

Understanding the Azure Datacenter Concept

To design a datacenter using VMware, Hyper-V or KVM, the design of the hosts, network and other details must be done by specialists in the hypervisor.

The same goes for Azure, we need to understand the different components to design a datacenter with its features.

This requires a lot of study. It is also necessary to break the physical datacenter paradigms and think in terms of services.

One way to do this is to use Microsoft’s own Guide available at https://docs.microsoft.com/en-us/azure/architecture/vdc/

This guide covers all the perspectives of a virtual datacenter and will help you understand the virtualization, network, security and services layers, as well as lift and shift, that is, the transformation to a more efficient model.

To get started, download the presentation available at https://aka.ms/VDC/Deck

Conclusion

It is not easy to execute a migration correctly, but it is possible, and the result will be much better.

Over the course of the month we will explore the items that make up the VDC, and you will see that it is possible to do this type of migration with new resources, greater efficiency and appropriate costs.

System Center 2019 and Windows Server 2019 – Upgrade in place

As is known, System Center has come out in its new version, now following the same branch concept (Current Branch) as Windows. From now on we will see versions following a number that indicates the release:

Roadmap

The 2019 version of the suite has no changes to the main layouts or features, but adds several new capabilities.

Today we have the new 1801 version; System Center 2019 is version 1901, and the expected launch date is March.

These new features can be viewed at the following link: https://thesystemcenterblog.com/2018/09/25/whats-new-in-system-center-2019/

System Center Configuration Manager Upgrade

Since the 2016 version, SCCM upgrades are a native and automatic feature. The process has always been very stable and easy to perform, and is available under Administration -> Updates and Servicing:

Upgrade SC (10)

Once started, you can go through the top bar menu and follow the whole installation step by step:

Upgrade SC (1)

Remember that it is not possible to interact with the upgrade after it starts; but if you chose to leave features disabled, you can later return to the menu shown in the first image and choose Features to enable the new ones.

Personally, I always prefer to install upgrades without selecting features and include the ones I want afterwards, so I can study the impact and the real need of each additional component running on the server.

System Center Service Manager Upgrade

Also simple to complete: mount the SCSM media and the installer enters upgrade mode, where you select which of the local server roles is being updated. It is important to know the structure in order to choose the correct server role being updated; in my case, the Management Server:

Upgrade SC (2)

Upgrade SC (6)

The update is very smooth, and at the end the system is already running. The new self-service portal now offers an HTML5 experience without the need for additional components:

Upgrade SC (9)

System Center Operations Manager Upgrade

Microsoft has really learned how to make system upgrades with System Center transparent, fast, and efficient. The same goes for SCOM.

Similar to SCSM, just mount the media and run the upgrade mode:

Upgrade SC (3)

Upgrade SC (8)

The warning message on the screen above has existed since previous versions. Because the System Center installers do not ask for a key, in some products it is necessary to enter the key later.

To enter the key, open the SCOM PowerShell console and use the command below, remembering that the System Center installation key has been the same for the entire suite since the 2012 version:

Set-SCOMLicense -ProductId 'xxxxx'
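A common way to confirm the key was applied, assuming the default Data Access service name (OMSDK), is to restart the service and check the SKU; this is a sketch, not an official procedure:

    # Restart the Data Access service so the new license takes effect
    Restart-Service OMSDK

    # SkuForLicense should now read "Retail" instead of an evaluation SKU
    Get-SCOMManagementGroup | Select-Object SkuForLicense, TimeOfExpiration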

Upgrading System Center Orchestrator and Virtual Machine Manager

To upgrade SCO I had to uninstall the server first. The reason, in my case, was that mid-year I had installed an update that was a beta, which made the automatic upgrade impossible.

In these cases, uninstall the server with the Retain Database option enabled; although this happened with Orchestrator, the process for SCVMM is similar:

After uninstalling the previous version, or even for a refresh, redo the installation with the option to use an existing database:

Upgrade SC (7)

Upgrade SC (5)

Upgrade SC (12)

This way, the installation of both System Center Orchestrator and Virtual Machine Manager ends up with all the previously existing data.

In many cases, Orchestrator and Virtual Machine Manager fail in the middle of the installation with a generic database error, with the message: "DBSetup.exe fails with unknown error 0x800A0E7A".

If this happens in your case, download and install the SQL Server 2012 Native Client – QFE, available at https://www.microsoft.com/en-us/download/details.aspx?id=50402

Upgrading to Windows Server 2019 with System Center services

On some of the servers, before upgrading Windows, I upgraded System Center.

That is because System Center 2019 is compatible with Windows Server 2012 R2, but not vice versa: the older System Center versions are not supported on Windows Server 2019. This means it is more reliable to upgrade the services first and then the operating system, which is also compatible.

Upgrade SC (11)

Conclusion

Upgrading your System Center servers is a stable process, but be sure to always have a backup of the databases in case a problem occurs during any of these phases.
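As a minimal sketch of that precaution, assuming the SqlServer PowerShell module and hypothetical instance and database names (verify your actual names first):

    # Back up each System Center database before touching the servers
    Import-Module SqlServer
    $databases = "OperationsManager", "VirtualManagerDB", "ServiceManager", "Orchestrator"
    foreach ($db in $databases) {
        Backup-SqlDatabase -ServerInstance "SQL01\SC" -Database $db -BackupFile "E:\Backup\$db-preupgrade.bak"
    }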

It is also important to remember the ordering rules: in general, upgrade the Management Servers before the other roles.

Operations Management Suite (OMS) is now Azure Monitoring

For some time now, OMS has been a tool I always bring up with clients and at events.

It is a very good product, with rich analysis; it has evolved a lot in the last year, becoming the product that many believe will replace System Center in the future.

What has changed in the interface?

The previous interface was simpler and lived in a separate portal, as shown in the post below:

https://msincic.wordpress.com/2017/10/15/acquiring-and-licensing-the-azure-who-operations-management-suite/

Now the interface is integrated into the Azure portal and allows you to create new dashboards easily. In addition, each of the monitors can be accessed individually.

image

image (1)

With this integration into the Azure interface it has become much easier and more functional.

And what about the licensing?

In the earlier post about OMS we discussed the acquisition process, which was complex, since each module was part of a bundle and each bundle of solutions was paid for separately. There was the option to buy per node or per log upload, but the upload model had limitations on solutions and modules.

Now it is much easier: there is only one charging model, based on upload volume.

That is, you can now pay for the size of the logs you send, which is much more practical and simple!

https://azure.microsoft.com/en-us/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/

image (2)

If you have not used Log Insights because the pricing was hard to understand, it is now simple and much cheaper!

Microsoft ATA – Recovery and Migration

We have already talked about Microsoft ATA (Advanced Threat Analytics) at https://msincic.wordpress.com/2018/02/26/microsoft-advanced-thread-analytics-ata/

There has now been a major upgrade with version 1.9, which made ATA lighter in its resource demands and in the display of reports.

However, during the migration it is possible that connection losses to MongoDB occur, making a backup and restore necessary.

The same process may be required when switching ATA servers.

Important: The Windows Security Log data is sent to machine learning to generate the incidents and alerts, but it is hosted locally. So if you lose the server, you no longer have the reports and incidents already recorded.

Performing ATA Backup

To back up the ATA configuration, copy the SystemProfile_yyyymmddhhmm.json file located in a Backup subdirectory of the ATA installation folder, where the last 300 copies are kept.

This SystemProfile file is the MongoDB configuration in JSON format, eliminating the need to back it up with Atlas or another MongoDB-specific administration tool. This is very good, since MongoDB administration knowledge is not common.

For this to work, you must also keep a copy of the certificate used to encrypt the JSON file, which is generated during installation (self-signed).

The certificate copy only needs to be made once: open the MMC console with the Certificates snap-in and find the ATA Center certificate in the Personal certificates store of the Local Machine.
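A minimal sketch of that one-time export in PowerShell, assuming the certificate subject contains "ATA Center" (verify the subject in your environment):

    # Find the ATA Center certificate in the Local Machine Personal store
    $cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*ATA Center*" }

    # Export it with its private key to a password-protected PFX file
    Export-PfxCertificate -Cert $cert -FilePath "E:\ATABackup\ATACenter.pfx" -Password (Read-Host -AsSecureString "PFX password")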

With these steps we have a backup of the server configuration, namely the JSON file and the certificate. But what about the ATA data?

To back up the ATA data it is necessary, as already mentioned, to know the MongoDB tools, and you may want to consider whether you still need old alerts once they have been resolved.

If you need to keep alerts and incidents, follow the document at https://docs.mongodb.com/manual/core/backups/ on how to back up the database.
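For completeness, a minimal sketch of a full dump, assuming the MongoDB binaries sit in the default ATA installation path (adjust the path to your environment):

    # Dump the whole ATA database using the MongoDB tooling bundled with ATA
    & "C:\Program Files\Microsoft Advanced Threat Analytics\Center\MongoDB\bin\mongodump.exe" --db ATA --out "E:\ATABackup\mongodump"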

Performing ATA Restore

Restoring ATA to a new server, or when configuring a new version, is a bit more complicated than the backup, which is quite simple.

You must first import the certificate exported earlier into the same certificate store as before (Personal, on the Local Machine).

You then need to reinstall the new ATA server with the same name and the previous IP and, when the installer asks for the certificate, disable the Create self-signed option so you can choose the original certificate.

Next, we need to stop the ATA Center service so that we can open MongoDB and import the JSON file with the following commands:

  • mongo.exe ATA
  • db.SystemProfile.remove({})
  • mongoimport.exe --db ATA --collection SystemProfile --file "<JSON file>" --upsert

Note: The first command opens the instance in the mongo shell, the second clears the newly created settings, and the third (run from a command prompt, not inside the mongo shell) imports the backed-up configuration.

It is not necessary to re-create the Gateways because they are mapped automatically when you restore the settings.

If you have backed up the MongoDB database, follow the base restore procedure before restarting the ATA service.

Reference: https://docs.microsoft.com/en-us/advanced-threat-analytics/disaster-recovery

Windows and SQL 2008 EOL – Extension Options

As is already known, Microsoft's product lifecycle for 2019 includes the end of support for Windows and SQL Server 2008 RTM and R2.

image (1)

image

Source: https://support.microsoft.com/en-us/lifecycle/search

Why is it important?

This is a typical problem in large enterprises: controlling the support lifecycle of the products they have deployed.

This matter is not of minor importance, since reaching end of support implies:

  • Fixes for new security threats, even those involving software vulnerabilities, are no longer made available for expired systems
  • New features in new products have no guarantee of working with expired products

The first item is very important. Imagine your company being vulnerable to an attack like the many we have seen, because just ONE SERVER in your environment is expired!

What do I do if I have products that expire?

Obviously the best option is to migrate ("TO BE"), but we know it is not always possible. What can help is using tools such as the Log Insights Service Map (http://www.marcelosincic.com.br/post/Azure-Log-Insigths-Service-Map.aspx).

But for those who cannot upgrade, one option is to buy support via Premier for another 3 years, which is not cheap but can be negotiated through your Microsoft account team.

The cost to extend support PER YEAR is equivalent to 75% of the full license price of the most current version. For example, extending for the full 3 years would cost about 2.25 times a new license.

However, Microsoft has offered a very interesting alternative: migrate to Azure "AS-IS"!

That is right: anyone migrating Windows 2008 and SQL Server 2008 to Azure will not have to worry, since they get extended security updates free for an additional 3 years.

https://azure.microsoft.com/en-us/blog/announcing-new-options-for-sql-server-2008-and-windows-server-2008-end-of-support/

There is no need to argue that this is a strategy to increase Azure usage, but it is financially very good for whatever workload you have.

tela1