How does a small, landlocked country with no natural resources become one of the richest in the world? Switzerland’s story shows that long-term success isn’t built on what you have—but on how you manage it.
Instead of relying on natural wealth, Switzerland built a reputation for neutrality, safety, and trust. Surrounded by powerful nations, it avoided conflicts while maintaining strong defense capabilities. Over time, this made it a secure place for people to store wealth—especially during periods of war and political instability across Europe.
Switzerland didn’t just attract money—it created systems to protect it. Its banks developed a reputation for privacy, stability, and careful management. The Swiss currency remained strong because the country prioritized low inflation and financial discipline, avoiding risky policies common elsewhere.
Strict banking rules, balanced government budgets, and a focus on long-term stability helped Switzerland build a reliable financial system. These decisions may limit short-term gains, but they ensure resilience over time.
Switzerland proves a powerful cycle:
Stability → Trust → Capital → Strong Institutions → More Stability
This model shows that consistent rules and discipline can create lasting economic strength—even without natural advantages.
Original Video: Switzerland Had Nothing And Built A Fortress. America Has Everything And Is Losing It
As AI systems become more powerful, they also become harder to understand. That’s where observability comes in — the ability to see, track, and understand what’s happening inside an AI system in real time. According to Microsoft, improving observability is key to building safer and more reliable AI.
Observability goes beyond basic monitoring: it gives teams the ability to trace what an AI system is doing, spot unexpected behavior early, and understand why a particular output was produced. This is especially important because AI systems can change behavior as they learn or interact with new data.
Without strong observability, organizations face serious risks: failures can go undetected, and unexpected behavior can go unexplained until it causes real damage. To improve AI observability, organizations should build in the ability to see, track, and inspect system behavior from the start, rather than treating it as an afterthought.
Observability is not just a technical feature — it’s a foundation for trustworthy AI. By making systems more transparent and easier to inspect, teams can respond faster, reduce risks, and build confidence in AI-driven decisions.
Original article: Observability for AI Systems: Strengthening visibility for proactive risk detection
The AI agent space is growing fast—but surprisingly, much of it is concentrated in just one area. Understanding this imbalance can help builders, founders, and curious readers spot where the real opportunities lie.
A large portion of today’s AI agent market is focused on developer tools and coding assistants. These agents help with writing, debugging, and managing code. Because developers are early adopters and already comfortable with AI, this category has expanded quickly.
Outside of coding tools, the market is still wide open: many industries have not yet fully adopted AI agents, leaving plenty of room for innovation.
The current landscape shows a pattern of early concentration followed by expansion. While developer-focused AI agents dominate today, the next wave will likely come from solving real-world problems in less technical fields.
Key takeaway: The biggest opportunities may not be where everyone is building—but where few have started.
Read more: https://garryslist.org/posts/half-the-ai-agent-market-is-one-category-the-rest-is-wide-open
When production issues happen, logs should clearly show what started, what happened in between, and how it ended. A simple and consistent pattern using _logger with Application Insights can make troubleshooting much easier.
Here is a practical example for an operation like retrieving role assignments.
Log the action and key parameters at the beginning:
const string action = "Retrieve role assignments for scope";
var stopwatch = Stopwatch.StartNew();
_logger.LogInformation("Start {Action}. Scope={Scope}, RoleId={RoleId}",
    action, scope, roleId);
This creates structured properties (Action, Scope, RoleId) that are searchable in Application Insights.
Log meaningful steps inside the method:
_logger.LogInformation("Flow {Action} - calling external service. Scope={Scope}",
    action, scope);
Only log major steps. Avoid too many details.
When successful:
stopwatch.Stop();
_logger.LogInformation("End {Action}. Count={Count}, ElapsedMs={ElapsedMs}",
    action, result.Count, stopwatch.ElapsedMilliseconds);
Include result summaries (like counts) and execution time. This helps detect performance issues.
On failure:
catch (Exception ex)
{
    stopwatch.Stop();
    _logger.LogError(ex, "Fail {Action}. Scope={Scope}, RoleId={RoleId}, ElapsedMs={ElapsedMs}",
        action, scope, roleId, stopwatch.ElapsedMilliseconds);
    throw;
}
Always pass the exception as the first argument to LogError, not as a template placeholder; Application Insights will then capture the full stack trace automatically.
With this consistent approach, logs become a reliable story of what your system did — and why.
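The pieces above can be combined into a single method. The following is a minimal sketch: the RoleAssignment type, the _service dependency, and the method signature are assumptions for illustration, not part of the original pattern.

```csharp
// Minimal sketch of the full start/flow/end/fail pattern.
// RoleAssignment, _service, and the signature are assumed for illustration.
public async Task<IReadOnlyList<RoleAssignment>> GetRoleAssignmentsAsync(
    string scope, string roleId)
{
    const string action = "Retrieve role assignments for scope";
    var stopwatch = Stopwatch.StartNew();
    _logger.LogInformation("Start {Action}. Scope={Scope}, RoleId={RoleId}",
        action, scope, roleId);

    try
    {
        _logger.LogInformation("Flow {Action} - calling external service. Scope={Scope}",
            action, scope);
        var result = await _service.GetRoleAssignmentsAsync(scope, roleId);

        stopwatch.Stop();
        _logger.LogInformation("End {Action}. Count={Count}, ElapsedMs={ElapsedMs}",
            action, result.Count, stopwatch.ElapsedMilliseconds);
        return result;
    }
    catch (Exception ex)
    {
        stopwatch.Stop();
        _logger.LogError(ex, "Fail {Action}. Scope={Scope}, RoleId={RoleId}, ElapsedMs={ElapsedMs}",
            action, scope, roleId, stopwatch.ElapsedMilliseconds);
        throw;
    }
}
```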
When an application stops responding or needs a refresh, restarting its Windows service is often the quickest fix. With PowerShell, you can do this in just one line.
The simplest method is:
Restart-Service -Name "ServiceName"
Replace "ServiceName" with the actual service name. This command safely stops and starts the service in one step.
If the service is stuck, you can force it:
Restart-Service -Name "ServiceName" -Force
For more control, stop and start it manually:
Stop-Service -Name "ServiceName" -Force
Start-Service -Name "ServiceName"
Search by partial service name
You can search with wildcards:
Get-Service -Name "PartialName*"
The * acts as a wildcard and matches all services that start with that text.
You can also search by display name:
Get-Service | Where-Object {$_.DisplayName -like "*partialname*"}
This helps you quickly find the correct service before restarting it.
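Once you know the name prefix, the search and restart steps can also be combined in one pipeline. A small sketch, where "MyApp" is a hypothetical prefix:

```powershell
# Find every service whose name starts with "MyApp" and restart it.
# "MyApp" is a hypothetical prefix - replace it with your own.
Get-Service -Name "MyApp*" | Restart-Service -Force
```

Be careful with broad wildcards here: every matching service is restarted, so check the Get-Service output first.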
Most service operations require administrator rights, so run PowerShell as Administrator.
These commands give you a fast and reliable way to manage Windows services. Once you understand these basics, troubleshooting system issues becomes much easier.
Sometimes configuration files are not valid JSON. They may contain comments or formatting issues. In such cases, ConvertFrom-Json will fail. When that happens, you can extract the environment section using a multiline regular expression.
Small example file:
{
  "application": "SampleApp",
  "environment": {
    "name": "Production",
    "debug": false
  },
  // comment
  "logging": "Information"
}
PowerShell Regex solution:
$content = Get-Content "appsettings.json" -Raw
if ($content -match '(?s)"environment"\s*:\s*\{.*?\}') {
    $matches[0]
}
Explanation: (?s) allows the dot to match across multiple lines, and .*? ensures non-greedy matching, so the pattern stops at the first closing brace after the environment object. This method works well when the file contains comments or other formatting issues that make ConvertFrom-Json fail.
Important: Regex does not fully understand nested JSON structures. Use it only when parsing cannot be used.
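To see that limitation concretely: if the environment object itself contained a nested object, the non-greedy match would stop at the first closing brace and return a truncated fragment. A hypothetical example:

```powershell
# Hypothetical nested example - the pattern stops at the FIRST "}",
# so the match ends inside the nested "limits" object and is truncated.
$content = '{ "environment": { "name": "Production", "limits": { "cpu": 2 } } }'
if ($content -match '(?s)"environment"\s*:\s*\{.*?\}') {
    $matches[0]   # returns only up to the first closing brace
}
```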
For Azure RunCommand scenarios with limited console output, this method provides a focused and practical solution.
If your configuration file is valid JSON, parsing it is the safest and most reliable method.
Small example file:
{
  "application": "SampleApp",
  "environment": {
    "name": "Production",
    "debug": false
  }
}
PowerShell solution:
$content = Get-Content "appsettings.json" -Raw
$json = $content | ConvertFrom-Json
$json.environment
This command parses the file into an object and returns the environment node. If you want formatted JSON output:
$json.environment | ConvertTo-Json -Depth 3
This method is recommended because ConvertFrom-Json understands the full JSON structure, handles nesting correctly, and returns objects whose properties you can access directly.
Always use this approach when the file is valid JSON. It is clean, simple, and suitable for production environments.
When using Azure RunCommand on a virtual machine, the output console is limited. If you print an entire configuration file, large parts may be cut off. This makes troubleshooting difficult, especially when you only need a small section such as the environment node.
Instead of displaying the full file, you can extract just the required part. This keeps the output small, clear, and readable. It also helps you work safely with large configuration files in production systems.
In this guide, you will see two practical approaches: parsing the file with ConvertFrom-Json when it is valid JSON, and extracting the section with a regular expression when it is not.
Both approaches are written for PowerShell and work well inside Azure RunCommand. The examples use small sample files to stay focused on the solution itself.
Each section explains one method clearly so you can choose the right technique for your scenario.
Code highlighting improves readability and helps readers understand examples faster. When using TinyMCE together with Highlight.js, many developers notice that JavaScript highlighting works, but JSON or plain text does not. This problem is common—and luckily easy to fix once you know where to look.
The first step is using the correct language identifiers in TinyMCE. The values defined in codesample_languages must match the language names that Highlight.js understands. JavaScript works because it is included by default, but JSON and plain text need special attention.
TinyMCE configuration
codesample_languages: [
  { text: 'Plain Text', value: 'plaintext' },
  { text: 'JSON', value: 'json' },
  { text: 'JavaScript', value: 'javascript' }
]
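For context, here is where that option sits in a TinyMCE setup. A minimal sketch, assuming the editor is bound to a hypothetical #editor element:

```javascript
// Minimal sketch - "#editor" is a hypothetical selector.
tinymce.init({
  selector: '#editor',
  plugins: 'codesample',
  codesample_languages: [
    { text: 'Plain Text', value: 'plaintext' },
    { text: 'JSON', value: 'json' },
    { text: 'JavaScript', value: 'javascript' }
  ]
});
```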
Next, make sure Highlight.js actually loads the required languages. Some builds do not include JSON or plain text automatically, so they must be added manually.
Highlight.js setup
<link rel="stylesheet"
href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.5.1/styles/vs.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.5.1/highlight.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.5.1/languages/json.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.5.1/languages/plaintext.min.js"></script>
<script>
hljs.highlightAll();
</script>
Finally, remember that TinyMCE content is often added dynamically. Highlight.js only runs once by default, so new code blocks must be highlighted again after rendering.
Re-run highlighting
document.querySelectorAll('pre code')
.forEach(el => hljs.highlightElement(el));
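If code blocks keep arriving after page load (for example, as saved editor content is rendered into the page), a MutationObserver can automate the re-run. A small sketch, assuming Highlight.js is already loaded:

```javascript
// Re-highlight any code block added to the page after initial load.
// ":not(.hljs)" skips blocks highlight.js has already processed,
// since it adds the "hljs" class to each element it highlights.
const observer = new MutationObserver(() => {
  document.querySelectorAll('pre code:not(.hljs)')
    .forEach(el => hljs.highlightElement(el));
});
observer.observe(document.body, { childList: true, subtree: true });
```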
With the correct language values, loaded modules, and proper initialization, JSON and plain text code blocks will highlight reliably and look consistent across your site.
Have you ever tried to use a feature in Azure DevOps and suddenly hit a wall? You see an error, but it is not clear whether the problem is your license or your permissions. This confusion is common, but the difference is simple once you know what to look for.
Error messages usually tell the truth.
Messages like “Access denied”, “Not authorized”, or “You do not have permission” point to a permission issue. You have the right license, but your account is not allowed to perform that action.
Messages that mention “access level”, “upgrade”, or “this feature requires Basic access” clearly indicate a license issue.
The user interface is another strong signal.
If a feature is completely missing (for example, pipelines or repositories), this is usually a license limitation.
If the feature is visible but blocked when you click it, the problem is most likely permissions.
Some features are locked behind higher access levels. For example, creating pipelines or pushing code requires a full user license, while advanced testing features need an even higher level. If your access level does not include the feature, no permission change will help.
For a final answer, an organization admin should check two things: your access level in user management, and your permissions in project security. Together, these checks always reveal the real cause.