Why should you use Swashbuckle over the Microsoft OpenAPI extension?
- The MS OpenAPI extension does not support JsonConverters.
- The MS OpenAPI extension does not throw a meaningful exception when it fails to generate the Swagger document.
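For reference, this is roughly what the Swashbuckle side looks like; a minimal registration sketch assuming a plain ASP.NET Core API (the title/version values are just placeholders):

using Microsoft.OpenApi.Models;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(options =>
{
    options.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" }); // placeholder metadata
});

var app = builder.Build();
app.UseSwagger();     // serves /swagger/v1/swagger.json
app.UseSwaggerUI();   // serves the interactive UI
app.MapControllers();
app.Run();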
So how do we solve it?
Basically, the outbox pattern provides "exactly once" processing (transport transaction + at-least-once delivery).
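To make that concrete, here is a minimal sketch of the idea, assuming hypothetical EF Core entities (Order, OutboxMessage) rather than a production implementation: the business write and the outbox record are committed in one database transaction, and a separate dispatcher publishes the outbox rows to the transport.

using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Hypothetical entities/context, for illustration only.
public class Order { public Guid Id { get; set; } public decimal Amount { get; set; } }

public class OutboxMessage
{
    public Guid Id { get; set; }
    public string Payload { get; set; } = "";
    public bool Dispatched { get; set; }
}

public class OrderDbContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
    public DbSet<OutboxMessage> Outbox => Set<OutboxMessage>();
}

public class PlaceOrderHandler
{
    private readonly OrderDbContext _db;
    public PlaceOrderHandler(OrderDbContext db) => _db = db;

    public async Task HandleAsync(Order order)
    {
        // 1. Business write and outbox record share ONE atomic commit.
        _db.Orders.Add(order);
        _db.Outbox.Add(new OutboxMessage { Id = Guid.NewGuid(), Payload = JsonSerializer.Serialize(order) });
        await _db.SaveChangesAsync();

        // 2. A separate dispatcher reads undispatched rows, publishes them to the transport
        //    (at-least-once) and marks them Dispatched; consumers de-duplicate on the message id,
        //    which is what adds up to "exactly once" processing.
    }
}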
By default, ASB is FIFO in nature: first in, first out. But the business-required message order is something that the application publisher and consumer should handle.
The following are the patterns we could use to accomplish this.
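Purely as an illustration of one such pattern (not the full list), ASB sessions let the publisher stamp a SessionId so that a session-enabled consumer processes each session's messages in order; the queue name, connection string and SessionId below are placeholders.

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class OrderedPublisher
{
    // Placeholder connection string; "orders" must be a session-enabled queue.
    public static async Task PublishStepsAsync(string connectionString)
    {
        await using var client = new ServiceBusClient(connectionString);
        ServiceBusSender sender = client.CreateSender("orders");

        for (var i = 1; i <= 3; i++)
        {
            // Messages sharing a SessionId are handed, in order, to whichever consumer
            // currently holds the lock for that session.
            await sender.SendMessageAsync(new ServiceBusMessage($"step {i}")
            {
                SessionId = "customer-42" // placeholder business key
            });
        }
    }
}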
The only way to assign a data-plane role to a managed identity is via PowerShell or the az CLI (command below). The UI role assignments, for example "Owner", are only for the management plane; for any data-plane roles, use the following.
resourceGroupName=""
accountName=""
readOnlyRoleDefinitionId=""
principalId=""
az cosmosdb sql role assignment create --account-name $accountName --resource-group $resourceGroupName --scope "/" --principal-id $principalId --role-definition-id $readOnlyRoleDefinitionId
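Once the data-plane role assignment exists, the app can drop account keys and authenticate with its managed identity; a minimal sketch with the Cosmos SDK and Azure.Identity (endpoint, database and container names are placeholders):

using Azure.Identity;
using Microsoft.Azure.Cosmos;

// DefaultAzureCredential resolves the managed identity when running in Azure
// (and falls back to your developer credentials locally).
var client = new CosmosClient(
    "https://<accountName>.documents.azure.com:443/",   // placeholder account endpoint
    new DefaultAzureCredential());

Container container = client.GetContainer("<database>", "<container>"); // placeholder names
// Data-plane calls on this container are now authorised by the role assignment created above.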
When we start building distributed systems/services, we quite often end up using some sort of messaging.
Any DB writes must go through a durable messaging system, with either a saga or a transaction.
Any DB reads DO NOT need to go through messaging or a transaction/saga; we should always have a separate flow/models for queries.
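To make the split concrete, here is a small sketch; the command, DTO and read-store types are made-up names, and Azure Service Bus stands in for whatever durable transport you use.

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Sketch only: CreateOrderCommand, OrderDto and IOrderReadStore are made-up names for illustration.
public record CreateOrderCommand(Guid OrderId, decimal Amount);
public record OrderDto(Guid OrderId, decimal Amount);

public interface IOrderReadStore
{
    Task<OrderDto?> GetAsync(Guid orderId); // plain read-model query, no messaging involved
}

public class OrderService
{
    private readonly ServiceBusSender _sender;   // durable transport for the write path
    private readonly IOrderReadStore _readStore; // direct store access for the read path

    public OrderService(ServiceBusSender sender, IOrderReadStore readStore)
        => (_sender, _readStore) = (sender, readStore);

    // Writes: publish a durable command; a handler (inside a saga/transaction) performs the DB write.
    public Task PlaceOrderAsync(CreateOrderCommand command)
        => _sender.SendMessageAsync(new ServiceBusMessage(BinaryData.FromObjectAsJson(command)));

    // Reads: no messaging, no saga; just query the read model.
    public Task<OrderDto?> GetOrderAsync(Guid orderId)
        => _readStore.GetAsync(orderId);
}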
Azure Storage and Azure Web Apps are among the most widely used services in Azure. Storage account security is implemented using a client secret, a managed identity (Table storage is not yet supported), or SAS tokens.
However, if we want to secure it at the network level, we have the following options:
- a VNet, using a private endpoint
- a VNet, using a service endpoint
- and finally the third option, the storage account firewall

This blog focuses on the firewall option, with an App Service talking to an Azure storage account.
By default, "Allow access from all networks" is enabled, unless you specified otherwise while creating the account. To enable the firewall, we have to select the "Selected networks" option. Once we do this, Azure shows a firewall list box where we can enter the IPs of the services that we want to allow to access the storage account.
If the services that are calling into this storage account are in the same region as the account, Azure ignores these settings. Yep, you read that correctly: if the caller service and the Azure storage account are in the same region, Azure does not respect the firewall entries. This setting only works if the storage account is in a different region. (We should probably ask MS why?)
So if your org is fine with having your App Service and storage account in different regions, and latency is not an issue, this is the approach to take, unless you want to PAY for the VNet option, which is only available from the "Standard" App Service SKU and above, at about $100 per month, excluding the storage account charges.
For the App Service and storage account firewall configuration, get the outbound IPs of the App Service and add them to the storage account's firewall list. If any other "Azure" service needs access to the storage account, for example Azure Resource Manager, then the public IPs of those services need to be whitelisted as well.
The scenario is a background processor which does the following:
A and B are idempotent, so even when we scaled the app we did not see any issues, as Azure Table storage was not using any concurrency control (last write wins).
Now the new scenario adds a step that sends a mail to the user.
Now the last-write-wins strategy is not going to work, because sending a mail is not idempotent. If only a single instance were running it would still be fine, but if we scale this to, say, 2 instances, users will receive mails twice.
Solution 1: Using a blob lease
We can create a blob and take a lease on it, which the background processor can check, only retrieving records when the lease is available. You can correlate this with a stored procedure in SQL which fetches records using a LOCK statement and some flag, say "IsProcessed".
This solution works, but even though we scale out the processors, at any point in time only a single processor would be working and the remaining ones would just be waiting for the lease to become available.
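A rough sketch of that lease gate with Azure.Storage.Blobs (the blob, lease duration and processing callback are example values, not a production implementation):

using System;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static class LeaseGate
{
    public static async Task<bool> TryProcessAsync(BlobClient lockBlob, Func<Task> processRecords)
    {
        BlobLeaseClient lease = lockBlob.GetBlobLeaseClient();
        try
        {
            // Only one processor instance can hold the lease at a time (15-60 seconds, renewable).
            await lease.AcquireAsync(TimeSpan.FromSeconds(60));
        }
        catch (RequestFailedException ex) when (ex.Status == 409)
        {
            return false; // another instance holds the lease; this one just waits for the next run
        }

        try
        {
            await processRecords(); // fetch and process the not-yet-processed records
            return true;
        }
        finally
        {
            await lease.ReleaseAsync();
        }
    }
}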
Solution 2: Using a storage queue
The advantage of this over the blob approach is that every instance can pull its own batch of messages in parallel (each received message is hidden from the others for a visibility timeout), so scaling out actually adds throughput instead of leaving processors waiting for a lease.
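A rough sketch of the queue approach with Azure.Storage.Queues (the queue name, batch size and visibility timeout are example values):

using System;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

public static class MailProcessor
{
    // Placeholder connection string and queue name.
    public static async Task RunOnceAsync(string connectionString)
    {
        var queue = new QueueClient(connectionString, "mail-jobs");

        // Each instance pulls its own batch; a received message stays invisible to the other
        // instances until the visibility timeout expires or the message is deleted.
        Response<QueueMessage[]> batch = await queue.ReceiveMessagesAsync(
            maxMessages: 16, visibilityTimeout: TimeSpan.FromMinutes(5));

        foreach (QueueMessage message in batch.Value)
        {
            // ... send the mail for this record ...
            await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
        }
    }
}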
While running the following docker command, I was running into the issue "--env": executable file not found in $PATH: unknown. As you can see below, all I am trying to do is pass a few environment variables to the run command.
docker run 84aa8c74fbc8 --env azclientId='00000000000000' --env azclientSecret='0000000000' --env aztenantId='00000000000'
But what the docs don't mention is that we need to pass the image after the --env variables, so the following run command fixed the issue.
docker run --env azclientId='00000000000000' --env azclientSecret='0000000000' --env aztenantId='00000000000' 84aa8c74fbc8
The title of the blog post is self-explanatory, but what I want to highlight here is the significance of Path.Combine. Most of the time I see code where we concatenate the folder name and the file name, for example:
var path = "C:\\testfolder\\filename.txt";
This works as expected, but imagine moving the code to a Linux environment: this small line of code will break the app completely.
Path.Combine, like Environment.NewLine, builds the path based on your respective OS: for Linux it would be /mnt/ss/filename.txt, for Windows it would be c:\ss\filename.txt.
await DownloadFile("http://sfasfsa.com/safdas/main.txt", Path.Combine(Environment.CurrentDirectory, "filename.txt"));

private static readonly HttpClient _httpClient = new HttpClient();

private static async Task DownloadFile(string uri, string outputPath)
{
    if (!Uri.TryCreate(uri, UriKind.Absolute, out _))
        throw new InvalidOperationException("URI is invalid.");

    // Stream the response straight to the OS-appropriate path built by Path.Combine.
    var stream = await _httpClient.GetStreamAsync(uri);
    await using var fileStream = new FileStream(outputPath, FileMode.Create);
    await stream.CopyToAsync(fileStream);
}
public static void Main()
{
    var host = new HostBuilder()
        .ConfigureAppConfiguration(e =>
            e.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
             .AddEnvironmentVariables()
        )
        .ConfigureServices(services =>
        {
            services.AddSingleton<DomainFacade>();
        })
        .ConfigureFunctionsWorkerDefaults()
        .Build();

    host.Run();
}
3. Update the Function with the following
public class ConfigurationTest
{
    private readonly DomainFacade _domainFacade;
    private readonly IConfiguration _configuration;

    public ConfigurationTest(DomainFacade domainFacade, IConfiguration configuration)
    {
        _domainFacade = domainFacade;
        _configuration = configuration;
    }

    [Function("ConfigurationTest")]
    public async Task Run([TimerTrigger("0 */1 * * * *")] TimerInfo timerInfo, FunctionContext context)
    {
        var logger = context.GetLogger("Function1");
        logger.LogInformation($"C# Timer trigger function executed at: {DateTime.Now} " + _configuration["CosmosDb:AccountPrimaryKey"]);
        await _domainFacade.DomainMethod("someValue");
    }
}
4. And this will work locally as well as in the cloud, without tinkering with the path of appsettings.json. The example is for a timer-trigger function, but this will work for any trigger.