This site is an automatic translation of http://www.marcelosincic.com.br; the original is in Portuguese.

Azure Sentinel – New Security Product Now Available

Azure Sentinel had been in preview for some time (since March), but it was already proving to be a very interesting product: https://azure.microsoft.com/pt-br/blog/azure-sentinel-general-availability-a-modern-siem-reimagined-in-the-cloud/?wt.mc_id=4029139

Its function is to analyze the data collected by Log Analytics and generate dashboards, reports, and custom alerts based on machine learning.

In this first post we’ll talk about Sentinel’s initial setup and its cost.

Note: In a second article we will talk about Incidents (cases), Hunting, Notebooks, Analytics, and Playbooks.

How to Enable Azure Sentinel

To create an instance of Sentinel you must have Log Analytics (formerly OMS) enabled and running. If you are not familiar with it, see what we covered earlier at https://msincic.wordpress.com/2018/12/10/operations-management-system-oms-is-now-azure-monitoring/

It is not necessary to do all of the Log Analytics configuration; it depends on what you will analyze. For example, if you analyze DNS but use Azure DNS, Office 365, Azure Activity, and other features that are already part of Azure, the data is analyzed without the need for agents.

On the other hand, if you are going to analyze general security threats, AD logins and logoffs, and environment security, you must have the agent installed on Windows or Linux to collect the log data.

Once the Log Analytics workspace is created, it is possible to link it to Sentinel.
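If you prefer to script this step, a minimal sketch using the Az PowerShell modules could look like the one below; the resource names are illustrative and the modules are assumed to be installed:

# Sign in, then create a resource group and a Log Analytics workspace (names are examples)
Connect-AzAccount
New-AzResourceGroup -Name "rg-sentinel" -Location "eastus"
New-AzOperationalInsightsWorkspace -ResourceGroupName "rg-sentinel" `
    -Name "law-sentinel-demo" -Location "eastus" -Sku "PerGB2018"

The link itself is then made in the portal by adding this workspace in the Azure Sentinel blade.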

1-Sentinel

With the workspace open it is possible to get an overview of the collected data; nothing very sophisticated, but enough to keep up with what is being analyzed.

2-Overview

By clicking on any of the summary items you can open the logs that generated the alerts or anomalies.

3-Details

How to Define What Will Be Analyzed

In the Sentinel console you can see the "Connectors" tab, where several connectors are already created and available, some in preview, with an indication of which ones have already been connected.

4-Connectors

Note in the last item that the cost takes effect per connector; that is, depending on the number and type of connectors, data processing will be charged.

For each connector you must open the workbook and configure the connection; for example, for Azure you indicate the subscription, and for Office 365 the user account that will log in and capture the data. Since each connector has a wizard, it is a very simple process to perform.

Consuming Reports and Dashboards

In the Sentinel blade, see the "Workbooks" option, where we can choose which dashboards to make available or create our own.

For example, if I click on the Exchange Online connector, I can view or save the workbook with its reports.

5-Workbooks

In the case above, note that the Save option does not appear, only Delete, since I had previously saved it as one of my most-used dashboards (workbooks).

By clicking View we can see the details of the Identity analysis dashboard, which provides login and security information for my environment.

6-My Workbook-1

6-My Workbook-2

6-My Workbook-3

6-My Workbook-4

The level of detail of the data gives us a true view of what is happening in each connected security item.

Sharing and Accessing Reports (Dashboards)

In the same "Workbooks" tab, switch to "My workbooks" to see the ones you have previously saved or customized.

In this example, 7 workbooks are already saved (1 of them customized) out of 31 templates. Workbooks are either custom or imported from templates, and the count of "31 templates" comes from the fact that the same connector can have more than one workbook, as is the case with Office 365, which has a set of 3 different reports.

7-Saved Workbooks

When accessing one of the reports, note the "Share" button, which generates a link you can send to others or use for quick access.

8-Share

To pin a shortcut to the Azure portal home dashboard, use the icon on the preview screen and the "Pin to dashboard" option, as below.

9-Pin

How Much Does Azure Sentinel Cost

We know that most Azure features are billed, and Azure Sentinel's pricing is already published at https://azure.microsoft.com/en-us/pricing/details/azure-sentinel/

The first option is to buy capacity in packages of 100 to 500 GB per day in an upfront model, starting at $200/day. The upfront model is of course cheaper, but it is only useful if you consume on the order of 100 GB per day, which would come to about $7,200/month.

The second option, useful for those who will analyze less than 100 GB per day, is the pay-as-you-go model at $4 per GB analyzed. At these prices the $200/day reservation breaks even at 50 GB/day (50 GB x $4 = $200).

To know how much is being analyzed, see the second image in this article, which shows the total data ingested.

Important: If you collect Log Analytics data, its cost must be added, since Log Analytics is a standalone solution billed separately.
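If you prefer to query the ingested volume instead of reading it from the dashboard, the sketch below uses the standard Usage table; the workspace ID is illustrative and the Az.OperationalInsights module is assumed:

# Billable ingestion per day for the last 30 days (Quantity is reported in MB)
$workspaceId = "00000000-0000-0000-0000-000000000000"   # replace with your workspace ID
$kql = @"
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000.0 by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql).Results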

Reserved Instance in Azure – Important Changes

For some time now we have had the ability to buy a virtual machine instance in advance, called a Reserved Instance.

Basically the process remains the same (https://msincic.wordpress.com/2017/11/20/azure-reserved-instance-available-for-purchase/), but there is some news and there are a few caveats:

  1. VM Type Change
  2. Other Reservable Resources
  3. Change in billing method
  4. What is not included in a reservation

Possibility of Instance Change (VM profile)

This change is important, because in the first version (link above) it was not possible to change the VM type.

Previously, the process to change was to request a refund of the already-paid instance (remember that there was a penalty) and buy it again with another VM type, even within the same family, such as D2 to D4.

Now simply use the Exchange button on a reservation and it is possible to choose the new VM type, as below, without the penalty of approximately 12%.

image

However, the cost of a D2 is obviously different from a D4, and for this there is a table that can be used to calculate the difference to be paid when switching between VM types, at https://docs.microsoft.com/en-us/azure/virtual-machines/windows/reserved-vm-instance-size-flexibility?wt.mc_id=4029139
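The table works with ratios within an instance size series, so the math itself is simple; as a hedged illustration (the ratios below are examples only, check the linked table for your series):

# Instance size flexibility: a reservation covers smaller sizes in the same series by ratio
$ratio = @{ "D2" = 1; "D4" = 2; "D8" = 4 }   # example ratios for a single series
$covered = $ratio["D4"] / $ratio["D2"]       # one reserved D4 covers two running D2 VMs
"A D4 reservation covers $covered D2 VM(s)"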

Other Resource Types in Addition to VMs

In the early version, RIs were just for VMs, but it is now possible to reserve several types of services. The following is the list of supported ones:

  • Reserved Virtual Machine Instance
  • Azure Cosmos DB reserved capacity
  • SQL Database reserved vCore
  • SQL Data Warehouse
  • App Service stamp fee

This list changes as new resource types are added; it is available at https://docs.microsoft.com/en-us/azure/billing/billing-save-compute-costs-reservations?wt.mc_id=4029139

Important: See the topic below for what is and is not included in an RI.

Monthly Payment Method

Until September 8, 2019, only upfront payment from Enterprise Agreement credits or by credit card was possible.

It is now possible to pay monthly; that is, each month consumes one-twelfth of the discounted annual amount, as if it were a monthly subscription. The best way to understand it is that the commitment remains annual, but it is paid monthly rather than upfront.

The other rules remain the same: penalty in case of cancellation, change of VM type or service, and so on.

For those who already have reservations, it will be necessary to wait until the current term ends, since it was paid in advance and canceling it would incur the cancellation fee.

https://docs.microsoft.com/en-us/azure/billing/billing-monthly-payments-reservations?wt.mc_id=4029139

Important: Monthly payment does not change the annual nature of the reservation, so there will still be a penalty in case of cancellation.

Separately Charged Resources

A common point of confusion for customers who purchase RIs is that other charges for VMs and features still appear on their statements.

What needs to be clear is that reservations cover only computational resources, not associated resources such as licenses, storage, and network traffic.

For example, for VM reservations, which are the most common:

  • Included in the RI: CPU, memory, and allocation
  • Not included in the RI: storage, network traffic, and OS licensing when Azure Hybrid Benefit (AHUB) is not used

The reason is that these non-included resources are part of the subscription and are either shared or optional (as is the case with the Windows or SQL licenses); they also vary independently of the VM type, so there would be no practical way to tie them to a specific reservation.

Conclusion

New resources that can be reserved, flexibility to change, new billing options, and a correct understanding can bring substantial savings to those who have migrated services.

Azure File Sync – Optimizing Your File Server and Storage

Two applications consume the most storage in IT environments:

  • Databases – Because they contain analytical and indexed data, we can use drill-down techniques to separate analytical data from summary data for easier access and cost optimization.
  • File Servers – Over the years, companies accumulate thousands of files, which is expensive and rarely clustered or tiered.

Tiering: technology where data is separated according to performance rules onto more expensive or cheaper disks. For example, rarely used files sit on SATA disks, files with occasional access on SAS disks, and files that are accessed daily on SSD disks.

Let's cover how to use Azure File Sync to create tiering of data on a file server, keeping the most-accessed files stored locally and the oldest ones in the cloud only.

Frequent Scenarios

The first scenario is to decrease the total size of space occupied by old files.

In this case we use the desired file age and free-space settings to decrease the disk space the file server occupies, freeing it up for other needs.

The second scenario is a distributed file server, where each branch of the company needs a local server to access the data.

In this example all servers replicate the same folder, which does not create local saturation problems, since the cache holds only recent files and is controlled by the desired percentage of free space to be kept.

Azure File Sync Components

  1. Storage Account – The virtual storage where the data will be stored
  2. Storage Account File Share – The folder within the Storage Account that receives the uploaded files
  3. Azure File Sync Service in the Marketplace – The service itself, which must be enabled, unlike other native services. Despite being in the Marketplace, AFS has no cost of its own; it is just the inclusion of a service
  4. File Sync Service – The Azure dashboard service where we can create groups, add servers, and configure storage
  5. Registered Servers – The servers that will be synchronized, where the files are stored and cached
  6. Sync Group – Forms the list of servers that will receive copies of the files and gives access to the files from any location

Creating the Storage Account

This is the first and well-known step for those who already use Azure, since we need storage for everything.

storage

To use AFS you do not need any additional configuration; choose whichever region, storage type, and replication best apply to your environment. Obviously some things need to be taken into account:

  • Account type determines maximum performance and will affect both download and upload when users use the files
  • Replication is important if you will have servers in multiple locations/countries
  • The Hot or Cool tier directly affects performance as well as cost; access is very slow on the Cool tier and I would not recommend it for a solution like this

Next we need to create the file share where the files will go when synchronized; the concept is the same as on a regular server:

file share
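For reference, the same storage account and file share can also be created from PowerShell; a minimal sketch assuming the Az.Storage module, with illustrative names:

# GPv2 storage account (locally redundant) and the file share that will receive the synced files
New-AzStorageAccount -ResourceGroupName "rg-afs" -Name "stafsdemo001" `
    -Location "eastus" -SkuName "Standard_LRS" -Kind "StorageV2"
New-AzRmStorageShare -ResourceGroupName "rg-afs" -StorageAccountName "stafsdemo001" -Name "fileserver-share"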

When synchronized, the files appear first in the sync staging folder and then in the main folder, as we can see below.

syncstaging

Files Sync

Remember that the two screens above refer to a synchronization that has already run: the first shows the files being copied and the second the state after the first synchronization has finished.

Enabling Azure File Sync

Search the Marketplace for Azure File Sync or Azure Sync Service:

mktplace

mktplace-2

At this point you can choose to use an existing or a new Resource Group; it does not matter in which Resource Group the Storage Account was created, as a Resource Group may hold several other services.

Creating the synchronization service

Creating the sync group is quite simple; just indicate the subscription, the storage account, and the file share defined earlier.

Service

sync group
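Scripted, the same sequence is roughly the sketch below, assuming the Az.StorageSync module; all names are illustrative and reuse the storage account from the earlier step:

# Sync service, sync group, and the cloud endpoint that points at the Azure file share
New-AzStorageSyncService -ResourceGroupName "rg-afs" -Name "afs-demo" -Location "eastus"
New-AzStorageSyncGroup -ResourceGroupName "rg-afs" -StorageSyncServiceName "afs-demo" -Name "sg-fileserver"
$account = Get-AzStorageAccount -ResourceGroupName "rg-afs" -Name "stafsdemo001"
New-AzStorageSyncCloudEndpoint -ResourceGroupName "rg-afs" -StorageSyncServiceName "afs-demo" `
    -SyncGroupName "sg-fileserver" -Name "cloud-endpoint" `
    -StorageAccountResourceId $account.Id -AzureFileShareName "fileserver-share"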

Registering File Servers

You can register servers in two situations:

  • New servers that do not have files, added to an already synchronized group so that they cache the files that are already in the Azure file share
  • Servers with data, whose content will be copied to Azure and added to the group

The first step is to install the Azure PowerShell (Az) modules on the server, which can be done by following the steps at https://docs.microsoft.com/en-us/powershell/azure/install-az-ps?view=azps-2.6.0&wt.mc_id=4029139

Once you have the Az modules installed, download and install the Sync Agent, which is very simple to do.
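After the agent MSI is installed, the server can also be registered from PowerShell; a sketch to run on the file server itself (the service names are illustrative and match the earlier steps):

# Run on the file server after installing the Azure File Sync agent
Connect-AzAccount
Register-AzStorageSyncServer -ResourceGroupName "rg-afs" -StorageSyncServiceName "afs-demo"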

AFS agent

registerserver

After that you will be able to see the server in the Azure dashboard:

registered server

No further configuration is required at this step, as it is a simple agent operation.

Creating the Endpoint (Cache Servers)

This is where we really create the service and watch the magic happen!

Enter the sync group we created earlier and use the Add Server Endpoint option to include the server in the group.

Endpoint

Let’s look at the options that are listed:

  1. Path – The directory we want synchronized. If it is empty and joins an existing group, content will be downloaded as it is used; if the server already contains files, they will be uploaded to Azure.
    Important: It is not possible to use the root of the drive (C:\); use a separate volume or folder, because of the system files.
  2. Volume Free Space Percentage – We do not define how much space the cache may use, but how much space on the volume must remain free. It may seem like an inverted calculation, but it exists because of the other files the same disk contains. For example, if the volume is 100 GB, contains other files totaling 40 GB, and we define that 50% of the disk must stay free, only 10 GB can be used by the cache (50% of 100 GB = 50 GB always free); as the non-synchronized files grow, less room is left for the cache.
    Tip: Because of this difficulty, prefer a dedicated volume for File Sync.
  3. Cache only files accessed or modified in the last X days – We saw that we can preserve a percentage of the disk, but that does little good if old files take up a lot of space. In my example, any file older than 60 days automatically goes to Azure and is removed from the server disk, freeing space even while the cache percentage is still available. A scripted sketch of these settings follows this list.
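As a sketch, the server endpoint with the tiering settings from my example (50% free space, 60 days) could be created like this; the path and names are illustrative:

# Server endpoint with cloud tiering: keep 50% of the volume free, tier files older than 60 days
$server = (Get-AzStorageSyncServer -ResourceGroupName "rg-afs" -StorageSyncServiceName "afs-demo")[0]
New-AzStorageSyncServerEndpoint -ResourceGroupName "rg-afs" -StorageSyncServiceName "afs-demo" `
    -SyncGroupName "sg-fileserver" -Name "fs01-data" `
    -ServerResourceId $server.ResourceId -ServerLocalPath "D:\Data" `
    -CloudTiering -VolumeFreeSpacePercent 50 -TierFilesOlderThanDays 60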

Dashboard

At the end of this configuration it is possible to follow the synchronization by clicking on the server:

Server sync

Once synchronized, we can use the metrics panels at the bottom of the screen to create alerts when errors or anomalies occur:

Metrics

In my example, I could create a rule saying that if more than 100 files are uploaded within 15 minutes, it may indicate a mass change caused by an improper copy or even malware.
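A sketch of such an alert using the Az.Monitor cmdlets is below; the metric name and thresholds are assumptions to validate against the metrics shown in the panel:

# Metric alert: more than 100 files synced within a 15-minute window (metric name is an assumption)
$svc = Get-AzStorageSyncService -ResourceGroupName "rg-afs" -Name "afs-demo"
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "StorageSyncSyncSessionAppliedFilesCount" `
    -TimeAggregation Total -Operator GreaterThan -Threshold 100
Add-AzMetricAlertRuleV2 -Name "mass-change-alert" -ResourceGroupName "rg-afs" `
    -TargetResourceId $svc.ResourceId -WindowSize (New-TimeSpan -Minutes 15) `
    -Frequency (New-TimeSpan -Minutes 5) -Condition $criteria -Severity 2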

System Center 2019 and Windows Server 2019 – Upgrade in place II

With the official launch of System Center 2019 last week, we can now test migration to the final version.

https://cloudblogs.microsoft.com/windowsserver/2019/03/07/coming-soon-microsoft-system-center-2019?wt.mc_id=4029139

New Version Policy

Under System Center's new version policy, there will be no Semi-Annual Channel as with Windows.

That is, you will have the 2019 version for approximately 3 years, with updates that usually occur 3 times a year.

This means that, unlike the first versions, which were 1801 and 1807, from now on we will no longer have that type of naming, returning to the old version model with update rollups (2019 UR1, UR2, and so on).

Important: System Center Configuration Manager continues on the Semi-Annual Channel

https://docs.microsoft.com/en-us/system-center/ltsc-and-sac-overview?wt.mc_id=4029139

Running the Upgrade

In the same document above, we see that in-place upgrade is supported for the last 2 versions.

This means that users of the 2012 R2 versions will need to first upgrade to 1801 and then to SC 2019.

Important: System Center Configuration Manager has different update rules depending on the chosen channel

Just as the upgrade from version 2016 to 1801 was uneventful, as I demonstrated at https://msincic.wordpress.com/2019/01/02/system-center-2019-and-windows-server-2019-upgrade-in-place/, the migration to 2019 was also quite satisfactory.

All of the products only require confirming the installation, with the exception of SCOM and VMM, where you need to upgrade the agents.

I did not upgrade DPM because I currently use Microsoft Azure Backup, which is a specialized subset of DPM for backup to Azure.

System Center Operations Manager (SCOM)

SCOM (3)

SCOM (2)

In the case of SCOM, the license key can now be entered through the interface, under "About"; previously it was necessary to do it in PowerShell with the Set-SCOMLicense command.

SCOM (4)

Remember that in the case of SCOM it is necessary to authorize the agent upgrade on all servers shortly after installation. If you do not, communication will continue, but constant warning alerts will be created, and new features can cause agents to fail.
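The approval can be done in the console or from the Operations Manager Shell; a minimal sketch:

# Approve all pending agent actions (for example, agent updates) after the upgrade
Get-SCOMPendingManagement | Approve-SCOMPendingManagement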

System Center Service Manager (SCSM) and System Center Orchestrator (SCO)

Literally nothing needed to be done or changed for Service Manager, and the same went for Orchestrator.

Service Manager (1)

Service Manager (2)

System Center Virtual Machine Manager (SCVMM or VMM)

VMM required a bit more work, because it is necessary to review the "Run As" accounts, which now limit the use of local accounts, and to reinstall the agents.

In my case, I deliberately uninstalled it to validate whether it would come back using just the database, and it worked!

VMM (1)

VMM (2)

VMM (3)

VMM (4)

Azure Virtual Datacenter (VDC) Part II-Basic Concepts

In the previous post we talked about migration to the cloud:

https://msincic.wordpress.com/2019/03/06/azure-virtual-datacenter-vdc-part-i-migration-as-is-and-to-be/

In this post we will understand the basic concepts, which are represented by this diagram:

image

Each part represents one of the pillars that support a Virtual Datacenter:

  • Encryption – All data traveling within a datacenter shared by multiple clients must be protected so that one client does not have access to another's data. This involves encryption of communication, disks, and traffic
  • Identity – A consistent identity model where customers can log in and view their objects with all available resources. In the case of Azure this is done by multi-tenant Active Directory. As is well known in the market, directory systems allow multiple companies to be hosted, sharing the database model and authentication with total isolation
  • Software-Defined Networks – How do you host multiple clients if everyone wants the same IP ranges and communicates over the same sets of cables?
    This is the challenge of SDN: to allow isolated traffic. In the past we did this with VLANs, but VLAN IDs are limited to 4,094 (12 bits). Today this is done logically using features such as NVGRE and VXLAN, where network packets are tagged with the tenant they belong to, similar to what VLANs did but with 24-bit identifiers.
    This allows multiple clients to have the same IP range, such as 10.0.0.0/24, since each virtual network receives a different tag in its packets, with encryption and identity guaranteeing reliable delivery of the data packets
  • Compliance – It would not help if migrating to a public datacenter locked you into patterns that only work there. Public clouds need to adopt market standards so that networks can communicate. This is not to say that the way Microsoft's Machine Learning is coded is the same as AWS's Machine Learning, but rather that the basic layer follows interoperability standards.
    For example, a VM in AWS can communicate over IP with a VM in Azure or Google Cloud, because they all use the same protocols, even if each provider offers different aggregate services.
    The same goes for an application such as Moodle or SAP; whether it runs in Azure or AWS does not matter, because they follow identical network and communication standards (interchange).
    Because of compliance, I can leave half of my servers on-premises and the others spread across 3 different public datacenters, all communicating normally.
  • Logging, Audit and Report – When migrating from a private (local) cloud to a public one, I need to know the costs and be sure my data is safe and accessible only by my users.
    Here we are not dealing with logs, audits, and reports for the client, but rather with the internal infrastructure, so that the provider is sure there is no data leakage, knows who performed each operation, and can report on it when necessary.
    That is why the control rooms of public cloud providers are gigantic: they need to control, and be able to recover from, any kind of failure that occurs.
    The first datacenters came from the concept of hosting; that is, you took the servers from your own rack to the provider, where electricity, links, and physical security were on their behalf. In that model, all responsibility for communication, logical security, and reporting was yours.
    In the public model a good part of the resources is allocated to controlling the other resources; for example, when creating the old Microsoft Azure Stack (now discontinued), several VMs were created for the purpose of supplying these control items.

Conclusion

In this second post we talk about the basic components that make up a public cloud.

Feel secure when placing your data with these providers; they are prepared to ensure the privacy and security of your data.

Azure Virtual Datacenter (VDC) Part I – Migration AS IS and TO BE

When we work on a public cloud migration project and the design is geared toward Azure, "AS IS" scenarios are very common.

AS IS

For those not familiar with the term, "AS IS" means taking the environment as it is, that is, copying the VMs from one environment to the other without any change, using Azure as a hypervisor; this is what is usually called lift and shift.

In general, AS IS migration models are not efficient, because they consume a lot of IaaS resources (VMs), which are expensive, and do not take advantage of services that are cheaper (SaaS or PaaS). The advantage, however, is that it is faster and does not require changes.

TO BE

Good migrations are the "TO BE" ones (in the original Portuguese, "SERÁ"), in the sense of transformation. The TO BE migration model is premised on using services, not just migrating VMs.

TO BE migrations are laborious and time-consuming, since the mapping involves understanding what is inside the VMs.

The running cost is much lower, because SaaS and PaaS have large financial advantages compared to the IaaS model.

For example, in AS IS, an IIS server and a SQL Server will simply have their virtual disks copied and started up. In the TO BE model, we isolate each of the applications that IIS runs, creating an App Service plan for isolation and a web app for each site, and in the case of SQL Server we would use the database service (SaaS or PaaS).

Using Service MAP

The first step in a migration is to map what each VM or physical server does in the environment.

For this we use the Service MAP: https://msincic.wordpress.com/2018/07/03/azure-log-insights-service-map/

It makes it possible to see the interconnections and services each server uses across the environment, and to map which services we have to replace.

Understanding the Azure Datacenter Concept

To design a datacenter using VMware, Hyper-V, or KVM, the design of hosts, network, and other details must be done by specialists in the hypervisor.

The same goes for Azure, we need to understand the different components to design a datacenter with its features.

For this it is necessary to study, and a lot. It is also necessary to break the physical-datacenter paradigms and think in terms of services.

One way to do this is to use Microsoft’s own Guide available at https://docs.microsoft.com/en-us/azure/architecture/vdc/

This guide covers all the perspectives of a virtual datacenter; it will help you understand the virtualization, network, security, and services layers, as well as lift and shift versus transformation to a more efficient model.

To get started download the presentation available at https://aka.ms/VDC/Deck

Conclusion

It is not easy to do a migration correctly, but it is possible, and the result will be much better.

Over the month we will explore the items that make up the VDC, and you will see that it is possible to do this type of migration with new resources, more efficiency, and appropriate costs.

System Center 2019 and Windows Server 2019 – Upgrade in place

As is known, System Center came out in a new version, now following the same branch concept (Current Branch) as Windows. From now on we will see versions following the number that indicates the edition:

Roadmap

The 2019 version of the suite has no changes to the main layouts or features, but it adds several new capabilities.

Today we have the 1801 version; System Center 2019 will be version 1901, and the expected launch date is March.

These new features can be viewed at the following link: https://thesystemcenterblog.com/2018/09/25/whats-new-in-system-center-2019/

System Center Configuration Manager Upgrade

Since the 2016 version, SCCM has been upgraded as a native and automatic feature. It has always been very stable and easy to perform, available in Administration -> Updates and Servicing:

Upgrade SC (10)

Once started, you can use the top bar menu and follow the whole installation step by step:

Upgrade SC (1)

Remember that it is not possible to interact with the upgrade after it starts; if you chose to leave features disabled, use the Features option in the menu shown in the first image to include the new ones later.

Personally, I always prefer to install upgrades without selecting features and then include the ones I want, so I can study the impact and the real need for more components running on the server.

System Center Service Manager Upgrade

Also simple to complete: mount the SCSM media and it will enter upgrade mode, where you select which of the local servers is being updated. Remember that it is important to know the structure in order to choose the correct server role being updated; in my case, the Management Server:

Upgrade SC (2)

Upgrade SC (6)

The update is very smooth, and at the end everything is already running. The new self-service portal now offers an HTML5 experience without the need for additional components:

Upgrade SC (9)

System Center Operations Manager Upgrade

Microsoft has really learned how to make System Center upgrades transparent, fast, and efficient. The same goes for SCOM.

Similar to SCSM, just mount the media and run the upgrade mode:

Upgrade SC (3)

Upgrade SC (8)

The warning message on the screen above has existed since previous versions: because System Center installers do not ask for a key, in some products it is necessary to enter the key later.

To enter the key, open the SCOM PowerShell (Operations Manager Shell) and use the command below, remembering that the System Center installation key has been the same for the entire suite since the 2012 version:

Set-SCOMLicense -ProductId 'xxxxx'
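To confirm the key was applied, the management group can be checked afterwards, also from the Operations Manager Shell; a quick verification sketch:

# Shows the licensed SKU, version, and expiration; "Retail" indicates a fully licensed installation
Get-SCOMManagementGroup | Format-Table SKUForLicense, Version, TimeOfExpiration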

Upgrading System Center Orchestrator and Virtual Machine Manager

To upgrade SCO I had to uninstall the server first. The reason, in my case, was that I had installed a beta update in the middle of the year, which made the automatic upgrade impossible.

In these cases, uninstall the server with the Retain Database option turned on; the process for Orchestrator and SCVMM is similar:

After uninstalling the previous version, or even for a refresh, redo the installation with the option to use an existing database:

Upgrade SC (7)

Upgrade SC (5)

Upgrade SC (12)

This means that the installation of both System Center Orchestrator and Virtual Machine Manager ends with the same existing data.

In many cases, Orchestrator and Virtual Machine Manager fail in the middle of the installation with a generic database error, with the message: "DBSetup.exe fails with unknown error 0x800A0E7A"

If this happens in your case, download and install SQL Server 2012 Native Client – QFE available at https://www.microsoft.com/en-us/download/details.aspx?id=50402

Upgrading Windows Server 2019 with System Center Services

On some of the servers, I upgraded System Center before upgrading Windows.

That is because System Center 2019 is compatible with Windows Server 2012 R2, but not vice versa. This means it is more reliable to first upgrade the services and then the operating system, which is also compatible.

Upgrade SC (11)

Conclusion

Upgrading your System Center servers is stable, but be sure to always have a backup of the databases in case a problem occurs during these phases.

It is also important to remember the ordering rules: in general, the Management Servers before the other roles.