Michuu1337 · Aug 29, 2025

Introduction — a little theory
Proper shard management and log retention are essential to keeping a Wazuh SIEM deployment healthy. Without shard monitoring and log retention policies, the system may eventually stop working properly. Without defined retention policies, unbounded log growth can exhaust all disk space; at that point the system stops working and the continuity of monitoring of your infrastructure is interrupted. Likewise, once the shard limit (1,000 per node by default) is reached, the Wazuh Indexer component can no longer create new indexes and shards, which may cause the Wazuh Dashboard to stop working.
In this guide, you will learn how to prevent the above-mentioned problems and effectively and efficiently manage shards and the log retention process.

What are Shards in Wazuh?
The term “shards” refers to parts (fragments) of indexes that are stored and managed by the Wazuh indexing component, based on OpenSearch technology.
Explanation of shards in Wazuh:
● As a SIEM system, Wazuh collects huge amounts of logs and security events from multiple sources.
● This data is stored in OpenSearch indexes.
● Each index is divided into smaller fragments called shards.
● Shards are self-contained units of data that distribute the processing load and enable scaling.
● Shards are distributed across multiple nodes in a cluster, which increases performance and fault tolerance.
● This allows for fast searching, analysis, and aggregation of data (e.g., in the Wazuh dashboard).
The importance of shards:
● They enable parallel data processing by dividing indexes into fragments.
● They ensure high availability and data redundancy through shard replication.
● They help scale the system as the amount of monitored data and the number of agents increase.
What is log retention in Wazuh?
In a Wazuh SIEM environment, log retention policies define how long collected security events and monitoring data are stored in the system before being deleted or archived. They are crucial because they directly affect system performance, storage usage, compliance, and overall manageability. Without properly configured retention, indexes in OpenSearch grow uncontrollably, consuming disk space, memory, and processing power, which leads to slower searches, higher costs, and even cluster instability. At the same time, many organizations must follow regulatory requirements such as GDPR, PCI DSS, or ISO 27001 that specify minimum or maximum log retention periods. By setting retention policies, you ensure that logs are automatically rotated, expired, or moved to cheaper storage tiers, which keeps the SIEM responsive, cost-effective, and compliant with legal and business needs.
Managing shards in Wazuh — step by step
This chapter provides step-by-step instructions on how to properly manage shards in Wazuh and verify individual issues. The topics are divided into separate sections. Note that all the commands I have included refer to my Wazuh lab infrastructure. After copying these commands to your system, remember to change the IP address (the Wazuh Indexer IP) and the Indexer username and password to match your configuration.
Verification of cluster status — including the number of current shards
Using the command below, you can verify basic information about the status of your cluster, including information about shards. The most important information will be the following:
● “active_primary_shards”
The number of active primary shards (each index has at least one “primary shard”).
● “active_shards”
Total number of active shards (primary + replica).
● “relocating_shards”
Number of shards currently being relocated between nodes (e.g., during cluster expansion or failure).
● “initializing_shards”
Number of shards currently being initialized (e.g., after a restart or new index).
● “unassigned_shards”
Number of shards not assigned to any node (a problem if >0).
● “delayed_unassigned_shards”
Number of unassigned shards whose assignment is delayed (e.g., waiting for a node to return after a failure).
Command to check basic information about the cluster (including shards):
curl -X GET "https://192.168.85.200:9200/_cluster/health?pretty" -u admin:pleasechangemenow -k
Example output:

From the screenshot above, we can conclude that the cluster is functioning properly and is in green status. All shards are active, and there are no issues with their allocation, synchronization, or replication. Performance should be at an optimal level, with no delays or pending tasks.
An important aspect to keep in mind is that if we have a very large number of indexes containing a lot of data (events) and if we have a large number of shards, then after restarting the Wazuh instance, the Wazuh Dashboard may be unavailable for a while because the shard pool must be initialized and properly synchronized. After restarting the Wazuh instance and executing the command mentioned above, you will see the number of shards that are currently being started/indexed in the “initializing shards” section.
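If you prefer to check these fields programmatically rather than reading the raw JSON, the health output can be summarized with a short script. Below is a minimal offline sketch in Python, assuming the field names returned by _cluster/health described above (the sample values are made up):

```python
# Offline sketch: interpret the JSON returned by GET _cluster/health.
# In practice you would feed in the parsed response from the curl command above.

def summarize_health(health: dict) -> list:
    """Return a list of human-readable warnings derived from _cluster/health."""
    warnings = []
    if health.get("status") != "green":
        warnings.append(f"cluster status is {health.get('status')}")
    if health.get("unassigned_shards", 0) > 0:
        warnings.append(f"{health['unassigned_shards']} unassigned shard(s)")
    if health.get("initializing_shards", 0) > 0:
        warnings.append(f"{health['initializing_shards']} shard(s) still initializing")
    if health.get("relocating_shards", 0) > 0:
        warnings.append(f"{health['relocating_shards']} shard(s) relocating")
    return warnings

# Sample values, invented for illustration:
sample = {
    "status": "green",
    "active_primary_shards": 120,
    "active_shards": 240,
    "relocating_shards": 0,
    "initializing_shards": 0,
    "unassigned_shards": 0,
}
print(summarize_health(sample))  # an empty list means nothing to worry about
```

An empty list corresponds to the green status discussed above; right after a restart you would typically see an "initializing" warning instead.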
Identification of individual unassigned shards
The following command filters the result to display only those shards that are in an unassigned state, allowing you to quickly check for shard allocation issues in your Wazuh cluster. After executing it, you will see the specific shards that have not been assigned.
Command displaying unassigned shards:
curl -X GET "https://192.168.85.200:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason" -u admin:pleasechangemenow -k | grep UNASSIGNED
Example Output:

Increasing the number of available shards (temporary operation)
A situation may arise where your Wazuh cluster has reached the default shard limit, i.e., 1,000 open shards per node. In this case, the Wazuh Dashboard will stop working and you will not be able to log in to it.
In the Wazuh Dashboard component logs, you will see the following errors: “all shards failed.”
In this situation, to “free up” the number of shards, you need to delete old, unnecessary indexes. The quickest and easiest way to do this is from the Wazuh Dashboard, so to access it, you need to temporarily increase the number of shards.
Please note that this is a temporary measure and it is not recommended to increase the number of shards from the default value of 1000. This is only a workaround and it is not recommended to leave this value permanently, as it may lead to performance and stability issues with your cluster.
Command to increase the number of shards:
curl -k -X PUT "https://192.168.85.200:9200/_cluster/settings" -H "Content-Type: application/json" -d '{ "persistent": { "cluster.max_shards_per_node": "3000" } }' -u admin:pleasechangemenow
Now verify that the limit has been raised using the command below. You should see a value of 3000 in the "max_shards_per_node" parameter.
Command to display cluster settings:
curl -X GET "https://192.168.85.200:9200/_cluster/settings?include_defaults=true&pretty" -u admin:pleasechangemenow -k

After completing these steps, wait a few minutes and you will be able to access the Wazuh Dashboard. In the next step, delete unnecessary indexes to free up shards. In the next chapter, you will learn how to delete indexes. After deleting the indexes, use the following command to restore the number of shards back to 1000.
Command to restore the number of shards to 1000:
curl -k -X PUT "https://192.168.85.200:9200/_cluster/settings" -H "Content-Type: application/json" -d '{ "persistent": { "cluster.max_shards_per_node": "1000" } }' -u admin:pleasechangemenow
To verify that the limit has been restored, use the following command:
curl -X GET "https://192.168.85.200:9200/_cluster/settings?include_defaults=true&pretty" -u admin:pleasechangemenow -k | grep max_shards_per_node
You should see a result like the one in the screenshot below:

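To understand how close the cluster is to the limit, remember that the cluster-wide ceiling is cluster.max_shards_per_node multiplied by the number of data nodes. A minimal sketch of that arithmetic (the inputs come from the health and settings commands above; the sample values are made up):

```python
# Offline sketch of the shard-limit arithmetic:
# ceiling = max_shards_per_node * number of data nodes.

def shard_headroom(active_shards: int, data_nodes: int,
                   max_shards_per_node: int = 1000) -> int:
    """How many more shards can be created before the limit is hit."""
    return data_nodes * max_shards_per_node - active_shards

# A single node at the default limit: no headroom, new indexes will be rejected.
print(shard_headroom(1000, data_nodes=1))  # 0
# The same node after the temporary bump to 3000:
print(shard_headroom(1000, data_nodes=1, max_shards_per_node=3000))  # 2000
```

When the headroom approaches zero, it is time to delete old indexes or review your retention policies, not to raise the limit permanently.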
Removing individual indexes — GUI (First Option)
An index in the Wazuh system is a collection of related documents (e.g., logs, events, alerts) that are stored and organized in OpenSearch. Indexes enable quick search and access to security data that Wazuh collects from various sources.
Indexes are critical to the operation of the Wazuh dashboard and the entire SIEM system, as they enable efficient aggregation, filtering, and analysis of the vast amounts of data collected.
In the Wazuh SIEM system dashboard, i.e. in the graphical user interface (GUI), you will be able to manage indexes (display them, delete them, etc.) using specific commands, which will be presented in this subsection.
In order to perform various administrative commands in the context of the Wazuh Indexer, you need to go to the “Dev Tools” section in the Wazuh Dashboard. To do this, go to the “Indexer Management” → “Dev Tools” section.

After completing these steps, you will see a panel where you can execute individual commands:

Before deleting individual indexes, first use the following command: GET _cat/indices to display the available indexes. After executing this command, you will see the indexes displayed on the right.
Importantly, the GET _cat/indices command will display all available indexes, not just the wazuh-archives-* and wazuh-alerts-* indexes. In a moment, I will show you the commands that will display only the wazuh-archives-* and wazuh-alerts-* indexes.
Remember that to execute a given command, you must click on the green arrow icon. This action will enable the execution of the command. Example below:

Example Output:

If you want to display only the wazuh-alerts-* and wazuh-archives-* indexes, you must use the following commands.
Display only wazuh-alerts-* indexes:
GET _cat/indices/wazuh-alerts-4.x-*?v&s=index
Display only wazuh-archives-* indexes:
GET _cat/indices/wazuh-archives-4.x-*?v&s=index
Display both wazuh-archives-* and wazuh-alerts-* indexes:
GET _cat/indices/wazuh-alerts*,wazuh-archives*?v&s=index
You can manage the displayed data in various ways. For example, if you want to display the wazuh-alerts-* indexes in JSON format and see how much disk space a given index takes up and how many entries (generated events) it has, you can use the following command. The displayed indexes will also be sorted in descending order by number of entries and size.
Command to use:
GET _cat/indices/wazuh-alerts*?format=json&h=index,docs.count,store.size&s=docs.count:desc
Example Output:

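Because the ?format=json variant returns a list of objects, the output is easy to post-process outside Dev Tools as well. A small sketch, assuming the field names used in the command above (the sample entries are invented):

```python
# Offline sketch: post-process the JSON list returned by
# GET _cat/indices/wazuh-alerts*?format=json&h=index,docs.count,store.size
# Sample entries, invented for illustration:
indices = [
    {"index": "wazuh-alerts-4.x-2025.08.20", "docs.count": "154321", "store.size": "210mb"},
    {"index": "wazuh-alerts-4.x-2025.08.21", "docs.count": "98012",  "store.size": "130mb"},
    {"index": "wazuh-alerts-4.x-2025.08.19", "docs.count": "501200", "store.size": "690mb"},
]

# The _cat API returns counts as strings, so convert before sorting.
largest = sorted(indices, key=lambda i: int(i["docs.count"]), reverse=True)
for entry in largest:
    print(entry["index"], entry["docs.count"], entry["store.size"])
```

Sorting locally like this is handy when you want the biggest indexes at the top before deciding which ones to delete.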
In the following example, I will finally (after a long introduction) show you how to delete a given index and how to delete multiple indexes at once using a single command. So let’s get to work.
Depending on which indexes you want to delete, you first need to know the full name of the index. So, in the first step, display all available wazuh-alerts-* or wazuh-archives-* indexes, or all indexes from both at once. In this example, I will show you how to delete an index from wazuh-alerts-* and indexes from wazuh-archives-*.
First, I display the available wazuh-alerts-* indexes using the command:
GET _cat/indices/wazuh-alerts-4.x-*?v&s=index
Below is an example output:

In the next step, after displaying the available indexes, you need to identify the index you want to delete. Copy the name of this index (in my case, it will be the index named: wazuh-alerts-4.x-2025.08.21) and use the following command to delete this index:
DELETE /wazuh-alerts-4.x-2025.08.21
After executing this command, you should see the following output:

This means that the index has been successfully deleted. To make sure that the index has been deleted, run the delete index command again. You should see the following result:

This result is correct and informs you that the index you tried to delete does not exist, so it has been successfully deleted.
I showed you how to delete individual indexes. This is the correct methodology, but when working with a large number of indexes and in certain situations, you will certainly want to delete multiple indexes at once. Now I will show you how you can do this with a single command.
In this case, I will work on the wazuh-archives-* indexes. I will show you how to delete all indexes from a given month.
To delete more wazuh-archives-* indexes from a given month, first display all wazuh-archives-* indexes using the following command:
GET _cat/indices/wazuh-archives-4.x-*?v&s=index
Example Output:

Note that each index has the date it was created and the specific day it refers to in its name. The format is as follows: year → month → day.
To delete indexes from a given month (in my case, it will be May 2025), use the following command: DELETE /wazuh-archives-4.x-2025.05.*
You should see the following result:

To verify that all indexes from May 2025 have been deleted, run the command displaying all wazuh-archives-* indexes again.
Command to run:
GET _cat/indices/wazuh-archives-4.x-*?v&s=index
Example Output:

Comparing the screenshot above with the previous one, you can see that all wazuh-archives-* indexes from May 2025 have been deleted. Using the same command pattern (DELETE /wazuh-archives-4.x-2025.05.* in this example), you can delete all indexes from a given month.
However, you should be very careful with this methodology. Adding an asterisk (*) at the end (a so-called wildcard) specifies all data from May in the current example. Be careful when adding wildcards so that you do not accidentally delete data that you will need later.
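One way to stay safe is to preview what a wildcard would match before running DELETE. A minimal offline sketch using Python's fnmatch module, which implements the same glob-style * matching (the index names below are examples):

```python
# Offline sketch: dry-run a wildcard such as wazuh-archives-4.x-2025.05.*
# against a list of index names before issuing the DELETE.
from fnmatch import fnmatch

def preview_delete(index_names: list, pattern: str) -> list:
    """Return the indexes that the wildcard pattern would delete."""
    return sorted(n for n in index_names if fnmatch(n, pattern))

# Example index names:
existing = [
    "wazuh-archives-4.x-2025.05.01",
    "wazuh-archives-4.x-2025.05.17",
    "wazuh-archives-4.x-2025.06.01",
    "wazuh-alerts-4.x-2025.05.01",
]
print(preview_delete(existing, "wazuh-archives-4.x-2025.05.*"))
# ['wazuh-archives-4.x-2025.05.01', 'wazuh-archives-4.x-2025.05.17']
```

Note that only the May archives indexes match; the June archives index and the May alerts index are untouched, which is exactly what you want to confirm before deleting.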
Removing individual indexes — GUI (Second Option)
There is also another way to delete individual indexes. In this methodology, you will no longer use individual administrative commands, but will perform these actions entirely graphically by clicking on individual sections in the Wazuh Dashboard. So let’s get started!
First, go to the Wazuh Dashboard, then open the navigation menu and go to the “Indexer Management” → “Index Management” section.

Next, go to the “Indexes” section by clicking on it. In this section, you should see all available indexes.
You should see a result similar to the screenshot below:

In this example, I will show you how to delete individual indexes. I will perform the operations to delete individual indexes on the wazuh-archives-* indexes. To display the wazuh-archives-* indexes, enter the following phrase in the search field: wazuh-archives.
You should see all wazuh-archives-* indexes:

To delete individual indexes on the left side, select the indexes on which you want to perform the operation (in this case, we will delete the selected indexes) and click on the “Actions” section. Then click on “Delete.” A window will appear showing the names of the indexes you want to delete. To confirm the deletion of the indexes, type the phrase “delete” and click on “Delete.”
Below are screenshots showing this index deletion operation:


After completing all of the above steps, you should see the following result. This means that the index deletion operation was successful. To confirm the result and make sure that the indexes have actually been deleted, filter the indexes wazuh-archives-* again in the “indexes” section and verify that the deleted indexes are not there.

In summary, regular index deletion in Wazuh primarily frees up disk space and restores the availability of shard resources in the cluster.
Benefits:
● Deleting old indexes frees up stored disk space, allowing new data to be written without the risk of filling up the partition.
● Deleting indexes frees up shards, allowing the cluster to function properly and avoiding shard limits and data availability issues.
Remember to regularly monitor the number of available shards and disk resources, and to create log retention policies.
You can learn about the process of creating log retention policies in the chapter entitled “Index retention management”.
Verification of Indexer logs related to shards
You can view logs at any time to find out about various issues with individual shards. Most often, you will see logs informing you that a given shard is unavailable. Usually, a shard will be unavailable when, for example, you have restarted the entire Wazuh instance and the shards need to be reinitialized. Depending on the number of indexes and shards, the shard initialization process may take a while.
To view the logs related to shards, use the following command:
cat /var/log/wazuh-indexer/wazuh-cluster.log | grep "shards"
You will see output similar to the screenshot below:

Changing the number of shards per index
Reducing the number of primary shards per index from the default value of 3 to 1 in Wazuh is a good practice for a single-node cluster or for small data volumes.
Key benefits:
● Resource savings — Fewer shards mean lower RAM and CPU consumption, faster searches, and simpler administration.
● No oversharding — Eliminating redundant shards prevents platform limits from being exceeded and data fragmentation, which can cause slowdowns and data unavailability.
● Better node utilization — For a single node (e.g., test, small production), 1 shard is the optimal configuration. For larger environments, the number of shards should be equal to the number of nodes to optimally distribute the load and ensure resilience.
In order to define the number of shards for a given index, you must first download the configuration file “w-indexer-template.json”. Use the following command to download this configuration file:
Please note that if your Wazuh instance does not have Internet access, the w-indexer-template.json file will not be downloaded. In this case, download it to a host with Internet access and upload it to the Wazuh server using, for example, the winscp application.
You should be able to see the “w-indexer-template.json” file in the directory where it was downloaded:

Edit the file. In my case, I will use the nano editor. Use the following command to open the file:
nano w-indexer-template.json
You will see that the “index.number_of_shards” parameter is set to 3. This is the default setting.

The “index.number_of_shards” parameter specifies how many fragments (main shards) the index will be divided into. Each shard is, in practice, a separate Lucene unit, i.e., an independent database that can be stored and processed on a separate node in the cluster. This allows indexes to be distributed and queries to be executed in parallel on multiple nodes, improving scalability and performance.
The number of indexed shards should correspond to the number of Wazuh nodes we have in our infrastructure. If you have an “all-in-one deployment,” it is recommended to change the default number from 3 to 1 for the “index.number_of_shards” parameter.
Reducing the number of shards to 1:
● minimizes resource overhead,
● simplifies cluster management,
● improves performance and stability,
● saves disk space,
● is consistent with current best practices for logs and moderately sized index data
In summary, in order to avoid problems with shards (reaching their limit, i.e., a value of 1000) and filling up disk space, it is recommended to change the value from 3 to 1 in the index.number_of_shards attribute.
On the other hand, if you have a Wazuh cluster set up with, for example, two Indexers, the value of index.number_of_shards should be set to 2, corresponding to the number of indexers.
In my case, I have a Wazuh infrastructure based on two Wazuh instances: Master and Worker. I have one Wazuh Indexer implemented on each of these instances, which means I have two Indexers, so I should set the value of “index.number_of_shards” to 2. If, for example, you have an “all-in-one” implementation, meaning you have one node, you should set the value of this parameter to 1.

I have discussed what the “index.number_of_shards” parameter is, so now I will discuss another important configuration parameter, which is “index.number_of_replicas”.
A replica is an additional copy of the data stored in an index shard. Replicas are maintained on nodes other than the primary shards, which ensures high availability and fault tolerance.
Rules for setting the number of replicas
● Default: In most installations, it is recommended to set 1 replica for each shard (“index.number_of_replicas”: “1”), which means that each shard has exactly one additional copy in the cluster.
● Minimum: If you only have one data node in your cluster, the number of replicas should be 0, because there is no node on which to store the replica.
● When to use more replicas? The more nodes you have in your cluster and the greater the availability or read performance you need, the more replicas you can configure — but each replica means additional disk and resource consumption.
Practical rule:
● 1 replica for a minimum of two data nodes — this is the most common and safe setting in production.
● 0 replicas for test environments or single-node development.
In summary, it is best to have a number of replicas equal to (or less than) the number of data nodes minus one. For example:
● If you have 2 data nodes → 1 replica.
● If you have 3 or more nodes → you may consider 2 or more replicas if you really care about high availability and performance.
In my case, I have two nodes, so I set the number of replicas to 1.
Now my configuration looks like this:

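These sizing rules can be summarized in a few lines. Below is a sketch that caps replicas at one, matching the practical rule above; clusters with three or more nodes may justify more replicas if high availability is a priority:

```python
# Offline sketch of the sizing rules discussed above:
# - primary shards should match the number of indexer nodes,
# - replicas should never exceed data_nodes - 1 (capped at 1 here,
#   per the practical rule for typical production setups).

def recommended_settings(data_nodes: int) -> dict:
    return {
        "index.number_of_shards": data_nodes,
        "index.number_of_replicas": max(0, min(1, data_nodes - 1)),
    }

print(recommended_settings(1))  # all-in-one: 1 shard, 0 replicas
print(recommended_settings(2))  # two indexers, as in my lab: 2 shards, 1 replica
```

The replica count of zero for a single node reflects the minimum rule above: with only one data node, there is nowhere to place a replica.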
Confirm settings by loading them
After completing the above configuration steps, you now need to load these settings into the system.
Below is the command to load the previously defined settings:
curl -X PUT "https://<INDEXER_IP_ADDRESS>:9200/_template/wazuh-custom" -H 'Content-Type: application/json' -d @w-indexer-template.json -k -u <INDEXER_USERNAME>:<INDEXER_PASSWORD>
If the above command was successful, you should see the following output: {“acknowledged”:true}
In the next step, you need to check that the settings have been loaded correctly. To do this, use the command below. Of course, remember to fill in the individual sections of the command with your data (Indexer IP address and credentials).
Command to execute:
curl "https://<INDEXER_IP_ADDRESS>:9200/_template/wazuh-custom?pretty&filter_path=wazuh-custom.settings" -k -u <INDEXER_USERNAME>:<INDEXER_PASSWORD>
Example Output:

Reindexing of a given index
It is important to note that the changes made earlier will only apply to newly created indexes. If you want the changes made earlier to be applied to existing indexes, you need to “reindex” the index and create a new index to which these changes will be applied immediately.
This can be done from the Wazuh GUI (Dashboard).
To reindex a given index, go to: Indexer Management → Indexes → Actions → Reindex. Of course, before reindexing, select the specific index on which you will perform this operation.

The first step is to create the target index that will receive the data from the old index. In the "Create Index" section, create this index, giving it an appropriate name and settings: the number of shards and replicas. These values should match what you previously set in the w-indexer-template.json file. After making these changes, click "Create Index." In the final step, click "Reindex" and wait a few seconds; an output should appear indicating that the index has been successfully reindexed.
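If you prefer Dev Tools to the GUI, the same operation can be performed with the _reindex API. Below is a sketch assuming a source index named wazuh-alerts-4.x-2025.08.21 and a hypothetical target name; adjust the shard and replica values to match your template:

```
PUT /wazuh-alerts-4.x-2025.08.21-reindexed
{
  "settings": {
    "index.number_of_shards": 2,
    "index.number_of_replicas": 1
  }
}

POST /_reindex
{
  "source": { "index": "wazuh-alerts-4.x-2025.08.21" },
  "dest":   { "index": "wazuh-alerts-4.x-2025.08.21-reindexed" }
}
```

As in the GUI flow, the target index is created first with the desired settings, and _reindex then copies the documents into it.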
The following screenshots show the steps taken to reindex the index:





After completing the above steps, you should see the following output:

In the final step, click on the index you created earlier, which is shown in the screenshot above. This will display the details of that index. If you have completed all the previous steps correctly, you will see the defined number of primary shards and the number of replicas you defined earlier.
The screenshot below shows my current configuration for the newly created index.
New indexes that will be created automatically by Wazuh will already have the parameters you defined in the w-indexer-template.json file.

Index retention management
You already know how to manage shards and indexes in Wazuh. In this chapter, I will show you how to manage the retention of created indexes, i.e., how long created indexes containing data (events and logs) should be stored.
Proper log retention management is a very important point.
By default, you will not have any retention policies created in Wazuh. This means that the indexes that have been created will be stored indefinitely and will not be automatically deleted after a certain period of time.
When your Wazuh instance collects a large amount of data and you have a large number of indexes without retention policies, after some time, the disk space may be completely used up, which will cause your Wazuh to stop working because the data will no longer be able to be saved anywhere.
In addition, if you do not configure retention policies in Wazuh, after some time you will exhaust the default limit of open shards per node (1000 shards per node by default), because each new indexing creates new shards that are not automatically deleted.
Okay, now you know that retention policies are very important. Now the question arises. How long should you store data indexes in the Wazuh Dashboard? It depends.
The data retention period in systems such as Wazuh depends largely on legal and regulatory requirements and internal organizational policies. First and foremost, every organization has different procedures and may be subject to different regulations. If you have implemented Wazuh in your organization and are creating index retention policies, you should familiarize yourself with the organization’s regulations or ask management directly how long the data should be stored. You can edit the retention policy you have created at any time. I will show you how to do this in the practical section.
After this brief introduction, let’s move on to the practical part!
Creating index retention policies
To begin creating retention policies, go to the hamburger menu → Indexer management → Index management.

In the next step, go to the “State Management Policies” section and click on “Create Policy.”

After completing this step, you will see a view with two options to choose from: Visual Editor and JSON Editor. In my case, I will use the JSON Editor. Click on “JSON Editor.” After completing this step, you will see the policy creation view as shown in the screenshot below:

In my case, I will create a retention policy that will be responsible for deleting indexes for alerts (wazuh-alerts-*) older than 30 days. This is a really short index retention period, but importantly, I am doing this in my lab environment, so I allowed myself a short retention period.
Remember that if you have implemented Wazuh in an organization, you need to agree with management on how long the indexes should be stored!
Below is the retention policy that is responsible for deleting wazuh-alerts-* indexes older than 30 days. In practice, this means that if 30 days have passed since the index was created, it will be deleted automatically.
The parameter in the policy below that is responsible for this action is “min_index_age”. In this parameter, you specify the minimum “age” of the index.
You can customize this policy to suit your needs. Remember that in the “min_index_age” parameter, you must specify the number of days for which the indexes are to be stored.
Policy for retaining wazuh-alerts-* indexes for 30 days:
{
  "policy": {
    "policy_id": "wazuh-alert-retention-policy-for-30d",
    "description": "Wazuh alerts retention policy for 30 days",
    "schema_version": 17,
    "error_notification": null,
    "default_state": "retention_state",
    "states": [
      {
        "name": "retention_state",
        "actions": [],
        "transitions": [
          {
            "state_name": "delete_alerts",
            "conditions": {
              "min_index_age": "30d"
            }
          }
        ]
      },
      {
        "name": "delete_alerts",
        "actions": [
          {
            "retry": {
              "count": 3,
              "backoff": "exponential",
              "delay": "1m"
            },
            "delete": {}
          }
        ],
        "transitions": []
      }
    ],
    "ism_template": [
      {
        "index_patterns": [
          "wazuh-alerts-*"
        ],
        "priority": 1
      }
    ]
  }
}
If, for example, you want to create a retention policy for wazuh-archives-* indexes, all you need to do is enter wazuh-archives-* in the “index_patterns” parameter in the above policy. In this parameter, you specify which specific index this retention policy should apply to.
You must paste the created index retention policy into the “Define Policy” section. This will replace the default entry. In the “Policy ID” section, give this policy a name. After completing these steps, click “Create” in the lower right corner to create this policy.
The screenshot below shows an example of how it looks in my case:

If you have done everything correctly, you will see the following output after creating the policy:

You will also see that the retention policy you created appears in the State Management Policies section:

In the next step, you need to assign this retention policy to the individual indexes to which you want to apply it.
To do this, go to the "Indexes" section and enter the phrase "alerts" in the search field. In my case, as I mentioned earlier, I created a retention policy for indexes with alerts (wazuh-alerts-*), so I enter the phrase "alerts." Then select the indexes to which you want to apply the previously created retention policy and go to the "Actions" → "Apply Policy" section.


In the next step, select the retention policy you created earlier and then click “Apply.”

To confirm that the retention policy has been correctly assigned to the indexes, go to the “Policy managed indexes” section. There you should see the names of the indexes to which the retention policy has been applied.
Pay particular attention to the following fields
· State
· Info
· Job Status
Immediately after creating the policy, you will see Job Status set to Running, while the State and Action sections will be empty at first. This is normal. Wait a few minutes, and in the "State" section you should see the state set to "retention_state." In the Action section, you will see the status "Transition." In turn, in the "Info" section, you will see output such as: "Evaluating transition conditions [index=wazuh-alerts-4.x-2025.08.25]".
This means that ISM (Index State Management) periodically checks whether the index has reached the defined age, and performs an action on these indexes accordingly. In this case, the action evaluates the "age" of the index and waits until it is older than the 30 days specified in the retention policy. Once the index reaches that age, it will be deleted automatically.
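The condition ISM evaluates can be illustrated with a short sketch. Note that ISM uses the index creation timestamp internally; the approximation below derives the age from the date suffix in the index name instead (the names and dates are examples):

```python
# Offline sketch of the min_index_age condition: an index is "expired" once
# it is older than the retention period. ISM tracks the real creation
# timestamp; here we approximate it from the ...-YYYY.MM.DD name suffix.
from datetime import date

def is_expired(index_name: str, retention_days: int, today: date) -> bool:
    """True if the daily index encoded as ...-YYYY.MM.DD is past retention."""
    y, m, d = index_name.rsplit("-", 1)[1].split(".")
    created = date(int(y), int(m), int(d))
    return (today - created).days > retention_days

print(is_expired("wazuh-alerts-4.x-2025.08.25", 30, today=date(2025, 9, 1)))   # False: 7 days old
print(is_expired("wazuh-alerts-4.x-2025.08.25", 30, today=date(2025, 10, 1)))  # True: 37 days old
```

This is exactly the waiting behavior described above: the transition condition stays false until the index crosses the 30-day threshold, at which point the delete action runs.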
The following screenshots show how it should look:


Summary
Managing shards, indexes, and index retention policies in Wazuh is crucial because it directly affects the performance of the entire system — too many small shards or uncontrolled index growth causes excessive load on the OpenSearch cluster, extends the time needed for searching and analyzing logs, and can lead to memory overload and slow down the Wazuh manager. Lack of retention results in the accumulation of huge amounts of data, which not only takes up disk space, but also hinders fast query execution and increases the risk of failure. Therefore, properly selected shard and index sizes and a well-configured retention policy guarantee stability, optimal event analysis speed, better resource utilization, and avoid system availability issues.
Wazuh Ambassadors Program: https://wazuh.com/ambassadors-program/?utm_source=ambassadors&utm_medium=referral&utm_campaign=ambassadors+program
Wazuh Webpage: https://wazuh.com/?utm_source=ambassadors&utm_medium=referral&utm_campaign=ambassadors+program
Contact me
If you have any questions, please contact me on LinkedIn.
My LinkedIn Profile: https://www.linkedin.com/in/%F0%9F%9B%A1%EF%B8%8Fmicha%C5%82-bednarczyk-2580a6228/