
This site is an automatic translation of http://www.marcelosincic.com.br; the original is in Portuguese.

Microsoft Sentinel–Automations do not run

A very common problem I see in customer Sentinel implementations is automations that never run.

To understand the problem, it helps to compare with Power Platform applications such as Power Apps, where the user needs to authorize the Office 365 account before the app can run. This is done on first run, and Power Apps or Power Automate saves the connection user.

The same happens with Sentinel automations: they are not necessarily linked to an Automation Account, so it is necessary to authenticate all of their connections!

How do I know if I have unauthenticated automations?

Open Sentinel and click Automations.

img1

When it opens, click the “API Connections” option that appears at the top right of the list.

Filter for automations that have errors.

img2

For automations whose connection has an error, open the properties, go to the “Edit API Connection” option and voilà, the problem is found.

img3

Remember that the API connections can differ: Office 365 when sending an email, a key for applications, SAS for storage, and other specific credentials used by the automation.

Now the same automations that had errors appear as “Connected”, indicating that they are working.

img4
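If you have many playbooks, you can also list the API connections and their status from PowerShell. A minimal sketch with the Az module, assuming the connections live in a single resource group (the resource group name is a placeholder, and the status property path is the one exposed by the Microsoft.Web/connections resource, so adjust it if your connections differ):

```powershell
# Lists all Logic Apps API connections in a resource group and flags the ones
# that are not in the "Connected" state (resource group name is hypothetical).
Connect-AzAccount

$connections = Get-AzResource -ResourceGroupName "rg-sentinel" `
    -ResourceType "Microsoft.Web/connections" -ExpandProperties

foreach ($conn in $connections) {
    # Each connection exposes a statuses collection; anything other than
    # "Connected" usually means the credential was never authorized or expired.
    $status = $conn.Properties.statuses[0].status
    if ($status -ne "Connected") {
        Write-Output "$($conn.Name) -> $status (needs Edit API Connection)"
    }
}
```

Any connection flagged here is a candidate for the “Edit API Connection” step shown above.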

System Center 2022 Launch – Still Worth It? Will it be discontinued?

The first time I received the MVP award was in the System Center category, which later changed to Cloud and Datacenter Management (CDM).

With the exponential growth of public clouds, on-premises environments have become integrated with resources available in the public cloud and/or have been migrated to it.

So I constantly get the question “Is System Center going to die?” and even claims “System Center has been discontinued”.

With the release of System Center 2022 on April 1st, we return to these questions: https://cloudblogs.microsoft.com/windowsserver/2022/04/01/system-center-2022-is-now-generally-available?WT.mc_id=AZ-MVP-4029139

So let’s go through some of these questions, using a presentation I gave at MVPConf.

What led to these conclusions?

  • Semi-annual updates were discontinued (1801, 1909, etc.); updates returned to the previous model of Update Rollups every 12 to 18 months and new versions every 3 or 4 years
  • Configuration Manager 2012 R2 was the last version to be part of the System Center suite; the product became Endpoint Manager in the Intune family
  • Service Manager received a communication from the product team in 2018 stating that the product would not be discontinued
  • Operations Manager did not have an integration with Azure Monitor
  • Virtual Machine Manager did not support new Hyper-V features and had limited Azure support
  • Orchestrator had few integration packs from third-party partners

Configuration Manager and Endpoint Manager

  • Moved from the System Center family to the Endpoint Management family
  • Integration with Intune and new Azure features like Analytics (Log and Desktop)
  • Possibility of using roles directly over the web (Cloud Management Gateway, CMG)
  • Licensing has been integrated into Microsoft 365, Enterprise Mobility Suite (EMS), Intune add-on and CoreCal Bridge licenses

Conclusion: The product was not discontinued, nor did it become a new family in order to “detach” it; it was repositioned under the Windows management team.

Operations Manager

  • Management Packs have all been updated for new products (Windows Server 2019, Exchange, SharePoint, etc)
  • A Management Pack for Azure was made available, enabling monitoring and dashboards, and the product received integration with Log Analytics, which feeds data for use in Azure Monitor
  • It reduces costs and delivers better alert performance for on-premises servers when the environment is integrated with Azure Monitor
  • Project Aquila will allow you to use SCOM as a SaaS (source: ZDNET and Directions)

Conclusion: It remains an important tool for on-premises environments. For cloud environments, Azure Monitor and other tools are recommended.

Virtual Machine Manager

  • It is being updated with new Windows Server 2019 features, but the time between a new Windows feature appearing and its inclusion follows the Update Rollup cadence of 12 to 18 months
  • It is still very important for those who use Hyper-V clusters, because of its cluster features and monitoring.
  • Windows Admin Center comes with many of the features that VMM has, but the VMM wizards are superior.

Conclusion: For large clusters VMM is indispensable, but for managing standalone Hyper-V servers Windows Admin Center is a good option.

Data Protection Manager

  • It kept the main backup features, but only for on-premises Microsoft products (SQL, Hyper-V, Exchange, etc.) and VMware; there are no plans to include third-party products
  • It does not support Azure services; each Azure service has its own backup tools. It accepts agents in Azure VMs, but the egress (download) cost must be taken into account
  • There is the free Microsoft Azure Recovery Services (MARS) agent, which is a subset of DPM without tape support

Conclusion: It is still important for on-premises Microsoft environments, or for Azure VMs backing up to local disks or tapes, but Azure environments should use the native backup features of each service.

Service Manager

  • The self-service portal is now in HTML 5
  • Supports integration with BMC, ServiceNow and others, but some connectors are paid third-party software
  • Stayed true to the ITIL v3 model
  • Workflow construction has been improved including a more user-friendly interface and more Orchestrator integration features

Conclusion: It is a suite tool that has received few advances and keeps its dependence on Orchestrator, which makes administration more complex. But as part of the suite it is financially justifiable as a whole.

Orchestrator

  • Integration Packs have all been updated for new products (Windows Server 2019, Exchange, SharePoint, etc)
  • Not all third-party Integration Packs have been updated, and most are paid
  • Support for PowerShell v4 now allows you to create new functionality in code, which removes the limitations of Integration Packs

Conclusion: It remains an important tool for on-premises environments. For cloud environments, Azure Monitor and other tools are recommended.

Alternatives to System Center

With the advancement of hybrid tools such as Azure Arc and Azure Automation, you can extend capabilities equivalent to System Center to on-premises servers.

image

Microsoft Defender for Cloud Secure Posture

After a few weeks in private preview, today Microsoft released to the public (GA) the Defender for Cloud changes to security posture.

Why rename from Secure Score to Secure Posture?

Saying you are 100% secure does not mean you are not actually at risk, and it does not mean you have no vulnerabilities.

Let’s use a driver and his vehicle as an analogy. This driver is careful and keeps all maintenance up to date. He drives prudently and rarely commits an infraction. But how many times have we seen someone lose control because of a flat tire, a pothole or a slippery road, an engine defect, a broken wheel or axle, brake failure, or even a sudden feeling of discomfort?

That is, your SECURITY POSTURE indicates that you follow the recommendations to avoid having known problems, but a security breach of an operating system, application or device is not an item that you can predict, only remedy…

So renaming “Secure Score” to “Secure Posture” shows that you follow the recommendations and keep the items UNDER YOUR CONTROL remediated and controlled, but you are still subject to external problems; see the recent examples of iOS vulnerabilities, Log4j, SolarWinds, MikroTik, etc.

Secure Posture now includes AWS and GCP

Previously it was already possible to onboard AWS and GCP accounts, but they were not reflected in the Score and compliance rules could not be applied to them. Thus, it was necessary to look at each platform separately and extract data from more than one place to generate a single compliance report.

What has changed is that Secure Posture now shows all the clouds built in, and you will see that when you open Defender for Cloud starting today!

p1p2p2B

What are the policies and rules evaluated?

This is a very interesting item, as the compliance rules you’ve already applied to Azure will automatically be duplicated to represent the same security posture in AWS.

See below that the same rule sets applied to Azure subscriptions are now applied and adapted to AWS.

p3p4

However, it is important to point out that only the compliance reports can be exported; it is not possible to generate the audit reports. The reason is that audit reports are based on datacenter security rules and not only on the services, i.e. they cover both provider and customer controls. Because of this, each provider needs to publish its own reports attesting to the logical and physical security of its infrastructure.

AWS Account Onboarding

Onboarding your AWS account is not a trivial process, as it involves creating an object in CloudWatch and configuring permissions. But doing the process through Azure is simple: it provides the step-by-step of what must be done and validates everything at the end.

p5

When adding an AWS account, you can choose to assess posture only or automatically install Azure Arc on VMs and capture logs with Log Analytics.

p6

It is important to remember that you will be able to use Azure policies and initiatives now on AWS, so the same custom requirements you have will be evaluated in both environments.

Conclusion

For customers with a multicloud environment, it will now be possible to have a single view of the posture based on custom rules or unique regulations.

Official Announcement: Security posture launch

Azure Sentinel–MITRE Coverage

Another feature that was in private preview for MVPs and partners and has now become public is the ability to map Sentinel alerts and hunting queries to the MITRE coverage matrix.

This feature does not generate additional cost, as it is a dashboard that displays the matrix instead of a list of items.

How MITRE is currently mapped

Today, alerts and hunting queries already bring the MITRE techniques that are being addressed as shown in the image below:


We can see that tactics and techniques are already mapped both in the header and in each of Sentinel’s active research items.

But this data is unstructured, and it is necessary to click and filter to relate the items to the MITRE coverage map, which is widely used today.

How MITRE will now be mapped with Preview

The same data from the above query can now be seen directly in the MITRE coverage map:


With this map it becomes much easier to categorize the different types of vulnerabilities I need to protect against, since the details give me the option to open the queries that generated the different points of attention!

In addition, with the Simulated option it is also possible to map how many different types of vulnerabilities I have coverage for, even if no query has returned results for them today.


This simulation effect is very interesting because it lets me know whether Sentinel has what it takes to cover all the points that interest me, and it gives me a management view that includes the hunting queries I create myself.

Public Preview Announcement: What’s Next in Microsoft Sentinel? – Microsoft Tech Community. Documentation: View MITRE coverage for your organization from Microsoft Sentinel | Microsoft Docs

Azure Sentinel–Log Search & Restore Preview

One of the programs Microsoft makes available to MVPs and partners is participation in private previews of Cloud Security features.

One of these features recently released to Public Preview is Log Search and Restore, where you can extend the Log Analytics retention time and use Sentinel itself to read the data from these archived logs.

Current Limitation

In the GA version, Sentinel keeps logs for up to 2 years: through the visual interface it is possible to configure 90 days, and via PowerShell up to 730 days.

In addition, when configured for 2 years the log ends up generating a higher cost, because it is tied to the Log Analytics retention price, which is charged per gigabyte.

New Limits and Cost

In this preview the log can now be kept for 7 years (2,520 days), at a lower cost that will be published at GA but is already much lower than the current one.

As with the current GA, the change to 7 years can be made via PowerShell.
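A minimal sketch of that change, using Invoke-AzRestMethod against the Log Analytics tables API (the subscription, resource group and workspace names are placeholders, the api-version is the preview one, and totalRetentionInDays is the property that controls the archive period):

```powershell
# Sets the SecurityEvent table to 90 days of interactive retention plus
# archive up to a total of 2,520 days (~7 years). Resource names are placeholders.
$path = "/subscriptions/<sub-id>/resourceGroups/rg-sentinel/providers/" +
        "Microsoft.OperationalInsights/workspaces/law-sentinel/tables/SecurityEvent" +
        "?api-version=2021-12-01-preview"

$body = @{
    properties = @{
        retentionInDays      = 90      # interactive (analytics) retention
        totalRetentionInDays = 2520    # interactive + archive retention
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Path $path -Method PATCH -Payload $body
```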

To help, you can use the tool available on GitHub, where you can choose the tables and the archive time for each one. In other words, you can keep the alerts table for 5 years and the incidents table for 2 years.

Link to configuration app: Azure-Sentinel/Tools/Archive-Log-Tool/ArchiveLogsTool-PowerShell at master · Azure/Azure-Sentinel · GitHub

Using the Resource

Below I demonstrate, with screenshots from my preview, how I performed the Restore and Search process and the final result.

Performing a Restore with the name of the table I want and the start and end dates:

image
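The same restore can also be triggered outside the portal. A hedged sketch using the preview REST shape for restored tables (the _RST suffix, path and api-version are assumptions based on the preview documentation; resource names and dates are placeholders):

```powershell
# Creates a restore table SecurityEvent_RST containing SecurityEvent data
# for the chosen window. Resource names and dates are placeholders.
$path = "/subscriptions/<sub-id>/resourceGroups/rg-sentinel/providers/" +
        "Microsoft.OperationalInsights/workspaces/law-sentinel/tables/SecurityEvent_RST" +
        "?api-version=2021-12-01-preview"

$body = @{
    properties = @{
        restoredLogs = @{
            sourceTable      = "SecurityEvent"
            startRestoreTime = "2021-06-01T00:00:00Z"
            endRestoreTime   = "2021-07-01T00:00:00Z"
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Path $path -Method PUT -Payload $body
```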

Next, I ran a query against the time range configured in the restore of the SecurityEvent table, filtering by the name of my server:

image

Finally, the result is a new Custom Table in Log Analytics with the name indicated above and the 3+ years of events restored!!!

Launch Preview Announcement: What’s New: Search, Basic Ingestion, Archive, and Data Restoration are Now in Public Preview – Microsoft Tech Community

Terms of Use Compliance in Azure AD Conditional Access

A common need for companies under the LGPD (the Brazilian General Data Protection Law) is that employees, third parties and contractors with logins in the environment accept a terms-of-use agreement.

Some companies already do this when hiring employees or contracting third parties. However, these terms of use often become obsolete, and sending them by email does not always provide “proof” that the collaborator has read the new terms.

On the other hand, even if the terms are not renewed, many companies want a record that the employee re-reads them periodically as a reminder of the security regulations.

Creating the Terms of Use

The terms of use need to be aligned with the legal department together with HR, as there are mandatory standards and clauses.

Once defined, generate a PDF and upload it to the Azure portal under Conditional Access, in Manage – Terms of Use:

image

Note that this is also where we can audit who has read the terms and upload versions in different languages!

Important: Note the Periodicity option, which I left as Monthly so that every month (every 30 days) users have to accept the terms again. Another important item is the option that requires the terms to be expanded.

Creating the Conditional Access Rule for Terms of Use

To create the Access Policy, use the Policy menu:

image

In the policy, define the Terms of Use created previously as a grant control. Note that if there are different terms of use, because employees and third parties have different rules, you can choose which one applies.

image

In the example above, we follow the recommendation to have a specific access policy for the Terms of Use, forcing them to be read.

Of course there are other options like who it applies to, combined rules, etc.

Result

And here is the result: every 30 days I need to expand the Terms of Use to access Office 365 features:

In this case, I had already expanded the terms, so the Accept button was enabled.

Conditional Access Using GPS

One major change, implemented in preview at the end of November in Azure Active Directory, is GPS-based location.

Previously we only had the option of using the IP address; however, if the company uses external proxies or the employee is on a VPN, as is now the case with many modern antivirus products with web protection, we have a problem!

Now it is possible to require in the conditional access policies that the user enable the phone's GPS, giving the real geographic location instead of the IP.

To do this, open the Locations settings and use the option Determine location by GPS coordinates:

image

Auditing access to sensitive data in Azure SQL Database

A few weeks ago I published the article about the use of Azure Purview as a Compliance Tool and I received several questions about auditing access to sensitive data.

This is a common question: Purview identifies sensitive data in different data sources, but it does not audit access to that data or log the queries.

To audit access it is necessary to use the tools of each data source, as they are different. For example, file access and data sharing are covered by Office 365 DLP (Information Protection), SQL Server access by SQL auditing, and so on.

Azure SQL Data Discovery & Classification

Once the Log Analytics solution is configured, you will have access to statistics and access details.

The first two captures below are my Log Analytics workspace (LAW) panel with the six tiles of the solution, where I can identify the IPs, users and data accessed.

01

02

By clicking on any of the tiles, you get the query that generated the data, which includes a very important piece of information: the SQL statement used to access the data, allowing you to see in detail what was viewed from the command syntax!

03
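This statement-level detail comes from SQL auditing being sent to the Log Analytics workspace. If you prefer to script that prerequisite rather than use the portal, a minimal sketch with the Az.Sql module (all resource names below are placeholders):

```powershell
# Sends Azure SQL Database audit events (including sensitive-data access)
# to a Log Analytics workspace so the solution can chart them.
Set-AzSqlDatabaseAudit -ResourceGroupName "rg-data" `
    -ServerName "sql-demo" `
    -DatabaseName "db-crm" `
    -LogAnalyticsTargetState Enabled `
    -WorkspaceResourceId "/subscriptions/<sub-id>/resourceGroups/rg-monitor/providers/Microsoft.OperationalInsights/workspaces/law-audit"
```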

Configuring SQL Data Discovery & Classification

Resource configuration is not complex and can be done in a few minutes through the Azure portal itself.

04

There are two ways to classify the data; the first is manual. To do this, access the Classification option in the panel above and manually add the tables and their columns.

When doing so, you set the information type and sensitivity of the column data being categorized.

05
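The same manual classification can also be scripted. A minimal sketch with the Az.Sql module, using hypothetical table and column names:

```powershell
# Classifies the CPF column of the Clientes table with an information type
# and sensitivity label (names here are illustrative, not from a real database).
Set-AzSqlDatabaseSensitivityClassification -ResourceGroupName "rg-data" `
    -ServerName "sql-demo" `
    -DatabaseName "db-crm" `
    -SchemaName "dbo" `
    -TableName "Clientes" `
    -ColumnName "CPF" `
    -InformationType "National ID" `
    -SensitivityLabel "Confidential"

# Lists everything already classified in the database for review.
Get-AzSqlDatabaseSensitivityClassification -ResourceGroupName "rg-data" `
    -ServerName "sql-demo" -DatabaseName "db-crm"
```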

The second way to categorize data is to use the automatic classification rules, which are still in preview, although it is already possible to view the results.

Click the Configure button on the resource panel and you will have access to the sensitivity labels, the same ones shown when we add columns in manual mode.

06

Note that in the example above I created my own classification called “LGPD”, and it includes some column names that I consider necessary (just as an example).

To create the content that will automatically be part of the classification, click the Manage information types button; you will see the types already created and can add new ones. In the example below I included RG, CPF and CNPJ, but I could also have added aliases such as “raz%soci%” or others with wildcards (%).

07

Important: Here we are classifying COLUMN NAMES and not DATA.

Log Analytics Solutions

Once the rules or the columns with sensitive data are defined, the Log Analytics workspace to which the database is mapped will show the installed solution that generates the charts included at the beginning of this article.

08

09

However, you will notice that in the monitoring panel a message appears saying that this type of panel (View) is being deprecated and that you should create a Workbook with the queries. This is not necessary right now, as the solution feature still works normally.

But if you want to create a workbook, click on the resource frames to open the queries and copy them into a custom workbook.
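The same queries behind the tiles can also be run outside the portal. A hedged sketch, assuming the audit events land in the AzureDiagnostics table of the workspace (the category and field names below are the ones I have seen for SQL auditing; adjust them to whatever the tile query actually uses, and the workspace ID is a placeholder):

```powershell
# Runs a Log Analytics query that lists who executed statements touching
# classified columns in the last 24 hours.
$query = @"
AzureDiagnostics
| where Category == 'SQLSecurityAuditEvents'
| where isnotempty(data_sensitivity_information_s)
| project TimeGenerated, server_principal_name_s, client_ip_s, statement_s
| order by TimeGenerated desc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" `
    -Query $query -Timespan (New-TimeSpan -Hours 24)

$result.Results | Format-Table
```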

Using Azure Application Insights in Vulnerability Analysis

Azure Application Insights, one of the tools commonly created alongside hosted web applications, is underutilized by development, operations and security teams.

What is possible with App Insights?

App Insights captures logs and telemetry to evaluate web app performance, stability and usage statistics.

When compared with other common tools such as Google Analytics, it is important to remember that App Insights also works as an APM (Application Performance Monitoring) tool, detailing which functions and lines of code, such as database calls, are causing slowdowns.

I particularly like some of the Smart Detection settings, which are common rules for detecting trends or problems, as well as the metrics and Live Metrics shown below. By the way, note that the contact page already shows a simulated attack in its calls:

Performance

Metricas-1

Another interesting function that I use frequently in projects is Availability, where we can create test rules against specific pages from different Azure locations, working like the old Global Monitoring Service.

Avaliability

The focus of this post is not detailing the functions of APM, but its use by the security team.

How is App Insights useful for Security?

First we have the Application Map, where we start the analysis. Basically it is a simple model of dependencies and communication from inside and outside, including the availability analysis shown earlier.

Application Map

But after removing the WAF to generate logs for this post, the result came very quickly, as can be seen in the diagram below.

Note that the two addresses at the bottom are unknown sources and could be attacks, while you can also clearly see crawlers and robots from Google and another site, which are not the problem.

Ataque-1

By asking for the details of the packets and communication exchanged between this address and my blog, it is possible to see what they tried and how many times.

Ataque-2

The next step is to click Samples, or the list on the right side, to analyze the requests that were received.

As can be seen below, by analyzing the details it is possible to identify where and how the access was made by this domain.

Ataque-3

But let’s keep it running longer with the WAF disabled so we can look at a detailed history.

Validating real attacks

Now, with more time exposed (since we like to take risks), my blog had more data to demonstrate.

Let’s open in detail the items on the map that showed the failure statistics.

On the first details screen we see that, from failures alone, I received more than 11 thousand calls in the last 24 hours!!!

Failures-1

But joy turns to sadness, or rather worry: these 11,290 calls are actually part of a brute-force attack…

Failures-2

Now let’s get a better understanding of what they are trying to do on my blog. For that, let’s take a “walk” through the App Insights data.

On the left side we can already see that the attacks tried to send parameters and lists directly to the blog pages.

Failures-3

Opening more details, I can find out that the source of the attack is PCs in China using a specific SDK. In some cases it is also possible to see the IP and thus build a blocking list of potential attackers.

Failures-4

Here is another, more recent example with almost the same origin (another Chinese city), but more sophisticated: a script was used and not just a SQL injection:

Failures-4a

Continuing the walk, I can see the sequence this “user” followed on my site; note that the list of attempts was long, and often against the same page:

Failures-5

The attack on the screen above is a very common SQL injection used against websites by attackers. See details in CAPEC – CAPEC-66: SQL Injection (Version 3.5) (mitre.org).

What should you do when you detect an attack on your website or application?

In general the Web Application Firewall handles most of the attacks we saw hitting my blog, so much so that I had no idea they could reach almost 12,000 in just 24 hours.

But even if you have a WAF it is important that you constantly monitor the number of page failures to identify if it is an application problem or an attack that is trying to find vulnerabilities, like the examples above on my site.

Another important action is to help developers not to accept commands directly from POST data, much less concatenate strings into internal parameters and commands.

Also use a robust development framework; for example, in my exposure test the blog was not compromised because .NET itself already has filters that block commands sent directly in the POST or URL.
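To illustrate the difference, here is a minimal sketch of a parameterized query, written in Windows PowerShell with System.Data.SqlClient just to keep the examples in a single language; the table, connection string and input are hypothetical:

```powershell
# User input goes into a parameter, never concatenated into the SQL text,
# so payloads like the ones captured above arrive as data, not as commands.
$connString = "Server=tcp:sql-demo.database.windows.net;Database=blog;User ID=app;Password=<secret>"
$userInput  = "Robert'); DROP TABLE Posts;--"   # simulated malicious input

$connection = New-Object System.Data.SqlClient.SqlConnection($connString)
$command    = $connection.CreateCommand()
$command.CommandText = "SELECT Title FROM Posts WHERE Author = @author"
[void]$command.Parameters.AddWithValue("@author", $userInput)

$connection.Open()
$reader = $command.ExecuteReader()
while ($reader.Read()) { $reader["Title"] }
$connection.Close()
```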

As a more sophisticated feature for targeted attacks, see the Create Work Items option, where you can create automations, for example to stop a certain service or even shut down a server when a very large anomaly is detected!

Remember that App Insights integrates with Log Analytics for queries and with Sentinel for intelligent threat protection!!!

Conclusion

If you don’t know App Insights, haven’t enabled it or don’t use it, start now!

Don’t limit yourself to performance analytics and sessions; learn to also read the signs of security breaches before an intrusion occurs.

Azure Purview as a Compliance Tool

We have long maintained database diagrams in logical files, used by DBAs and developers to create applications and, more recently, by other roles to build dashboards.

However, with the advancement of compliance laws such as GDPR and LGPD, knowing who has access to sensitive data and how it is accessed has become essential.

Many DLP tools already use connectors for this mapping, for example the Security Center can extend to SQL Server in paid plans.

But what if we have multiple databases on different products, platforms and services? In this case we have Azure Purview.

What does Purview offer?

With the connector catalog you can include multiple data sources, ranging from SQL and Oracle to AWS S3 and Azure Blob, discover what is being made available, and automatically map data classifications and sensitivity.

For example, who are the users in Power BI who consume a certain database containing credit cards? Which storage accounts have unstructured data containing customer personal documents or medical exams?

This is where Purview acts, both cataloging sensitive data and functioning as a data dictionary, mapping access to company data by Power BI, for example.

Purview requirements

To use Purview you need to create an execution instance (starting with C1, with 4 “units”), an Event Hub and a Storage Account.

image

image

Purview’s cost is computed by the units and also by the scans performed, some of which are still in preview and free of charge until August 2nd, at the time I write this article: Pricing – Azure Purview | Microsoft Azure

Accessing Purview Studio

All administration is done by Studio, where we create the connectors and perform the scans.

Important: Data access uses a managed identity with the name of the resource you created, following the steps for each type of resource you will map.

clip_image001

Registering Data Sources

Purview already has a series of connectors:

clip_image001[5]

The walkthrough below shows the connection to a SQL Database data source. First we define the data source and the collection, which is nothing more than creating groupings before the connection, as you can see in the previous screen.

clip_image001[7]

Next, we define how the data will be accessed; in the configuration above we see the server and database, but we have not accessed them yet. For this, the type of identity is defined, and in the example I used the managed identity, granting it read permissions in SQL Server Management Studio (link in “see more”):

clip_image001[11]
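For reference, the read permission mentioned above is granted to the Purview managed identity inside the database itself. A hedged sketch using the SqlServer PowerShell module (the Purview account name, server and database are placeholders; the T-SQL is the usual pattern for managed identities on Azure SQL):

```powershell
# Creates a contained user for the Purview managed identity and grants read access.
# Requires an Azure AD admin token for the database and the SqlServer module.
$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token

Invoke-Sqlcmd -ServerInstance "sql-demo.database.windows.net" -Database "db-crm" -AccessToken $token -Query @"
CREATE USER [purview-demo] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [purview-demo];
"@
```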

This access configuration is done through the SCANS, which define what will be accessed; in the example it is a SQL Server connection to the database. Next, you will see that I selected the tables and which classifications I want to map (note that they are the same ones used in Office 365):

clip_image001[13]

clip_image001[15]

Once the scans are configured, you can define whether they run once or on a recurring schedule. This can be done by opening the objects under the collection, where you can see the list, the different processes and the details of what was accessed:

clip_image001[19]

clip_image001[22]

clip_image001[17]

Classifying and Detailing Mapped Data

Once the scans have run, Purview automatically creates the assets, as can be seen in the Browse Assets function in the sequence below:

clip_image001[24]

clip_image001[26]

Once a mapped database is opened, we can define details, for example, create a data classification for the entire database, defining who the owners/architects of the data are and even defining a specific type for each column:

clip_image001[28]

clip_image001[30]

clip_image001[32]

clip_image001[34]

clip_image001[36]

clip_image001[38]

In this sequence of screens we can see that contacts are important to identify who knows and who mapped that database. We also see how to define the data description and classification individually, in addition to what Purview detected automatically.

Still in assets, can I visualize data usage, for example which databases are being used in Power BI by Pro users? Purview gives you this view, as shown below:

clip_image001[40]

clip_image001[42]

Customizing sensitive data and creating the glossary

In the examples above and in the Purview interface, two items can be seen: one already familiar, the automatic sensitivity classification, and the other, the glossary.

The classification already comes preloaded with the Office 365 data types, which are standard for the compliance regulations Microsoft already supports, but you can create custom ones just as in Office 365 Compliance:

clip_image001[44]

Furthermore, you can create glossary terms, which are nothing more than a data dictionary for consultation. It is very important to do this, as it becomes a knowledge base for administrators and other specialists to learn about, for example, specific databases.

Once the glossary entries are created, in each data source, table and column it will be possible to identify this classification as shown in the table data screen.

clip_image001[46]

Interestingly, glossary items can have attributes; that is, if you classify a table or column as confidential, you can require a mandatory attribute to record the reason:

clip_image002

Conclusion

Once everything is mapped with Purview, it is possible to have visibility of sensitive data usage, classify data in general and build a modern data dictionary.