Monday, December 16, 2019

AWS Param store put with urls via CLI

It seems the CLI will follow URLs given to it, which for Parameter Store is not very useful when you want to store the website address itself.


The workaround:

aws configure set cli_follow_urlparam false


per
https://github.com/aws/aws-cli/issues/1475


Then this works:

aws ssm put-parameter --name "/myappconfig/" --type "String" --value "http://enhanceindustries.com.au/" --region ap-southeast-2
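To make the behaviour concrete, here is a small Python sketch that simulates the CLI's resolution logic (this is my own model of the documented behaviour, not the AWS CLI's actual code; the fetch callback is injectable so it runs without network access):

```python
# Simulation of the AWS CLI's value resolution (not the real implementation):
# when cli_follow_urlparam is true, an http:// or https:// value is fetched
# and the response body is stored instead of the URL itself.
def resolve_param_value(value, follow_urlparam=True, fetch=None):
    """Return the value the CLI would actually send to Parameter Store."""
    if follow_urlparam and value.startswith(("http://", "https://")):
        # fetch is injectable so this sketch can run offline
        return (fetch or (lambda url: ""))(value)
    return value

# With the workaround (cli_follow_urlparam false) the URL is stored verbatim
print(resolve_param_value("http://enhanceindustries.com.au/",
                          follow_urlparam=False))
# -> http://enhanceindustries.com.au/
```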


Friday, December 13, 2019

Australian API Standards


This just came across my desk.

Looks like an awesome set of specs to build to.

https://api.gov.au/standards/national_api_standards/index.html

AWS CloudFormation: how to swap between IPs and AliasTarget via conditions


The template below allows you to swap between CloudFront and a static IP address.

Note the new line after the - for the If statement; this tells YAML that this is an object array. You need to replicate everything in the object array, as CloudFormation does not merge object arrays together.



WebDNSRecordSet:
  Type: AWS::Route53::RecordSet
  DependsOn:
    - DistributionConfig
  Properties:
    Fn::If:
      - IsIPRestricted
      -
        HostedZoneName: !Sub "${Domain}."
        ResourceRecords:
          - 123.123.123.123
        TTL: '900'
        Name: !Sub "www.${Domain}."
        Type: A
      -
        HostedZoneName: !Sub "${Domain}."
        AliasTarget:
          DNSName:
            Fn::GetAtt: [DistributionConfig, DomainName]
          HostedZoneId: "Z2FDTNDATAQYW2"
        Name: !Sub "www.${Domain}."
        Type: A

When run through https://www.json2yaml.com/ you get:

{
  "WebDNSRecordSet": {
    "Type": "AWS::Route53::RecordSet",
    "DependsOn": [
      "DistributionConfig"
    ],
    "Properties": {
      "Fn::If": [
        "IsIPRestricted",
        {
          "HostedZoneName": "${Domain}.",
          "ResourceRecords": [
            "123.123.123.123"
          ],
          "TTL": "900",
          "Name": "www.${Domain}.",
          "Type": "A"
        },
        {
          "HostedZoneName": "${Domain}.",
          "AliasTarget": {
            "DNSName": {
              "Fn::GetAtt": [
                "DistributionConfig",
                "DomainName"
              ]
            },
            "HostedZoneId": "Z2FDTNDATAQYW2"
          },
          "Name": "www.${Domain}.",
          "Type": "A"
        }
      ]
    }
  }
}
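The "no merging" point can be demonstrated with a small Python sketch of how Fn::If resolves (a simplified model of the behaviour, not CloudFormation's actual engine): the condition selects one branch wholesale, so any key you leave out of a branch is simply absent.

```python
def resolve_fn_if(condition, true_branch, false_branch):
    """Simplified model of Fn::If: pick one branch wholesale, never merge."""
    return true_branch if condition else false_branch

# The two Properties branches from the template above (values hard-coded here)
ip_branch = {"HostedZoneName": "example.com.",
             "ResourceRecords": ["123.123.123.123"],
             "TTL": "900", "Name": "www.example.com.", "Type": "A"}
alias_branch = {"HostedZoneName": "example.com.",
                "AliasTarget": {"DNSName": "d111.cloudfront.net",
                                "HostedZoneId": "Z2FDTNDATAQYW2"},
                "Name": "www.example.com.", "Type": "A"}

props = resolve_fn_if(False, ip_branch, alias_branch)
# The alias branch has no TTL/ResourceRecords: nothing leaks in from the
# other branch, which is why each branch must repeat the shared keys.
print("TTL" in props)  # False
```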

Friday, July 26, 2019

Cloudfront With Squiz Edge being an origin

Just wanted to share an insight on doing a Squiz CMS overlay beside another product that has no CMS features, i.e. the homepage of a COTS solution.


Problem space:
I want to have Squiz Edge as an origin in CloudFront so that I can have some pages fully managed by the content team without developer intervention.

Issues encountered:
CloudFront started returning 502 errors, which their documentation attributes to SSL issues.
Squiz (the company) needs to be contacted to update their Squiz Edge system to acknowledge your hostname, as well as to do the required setup inside Squiz Matrix.

Outcome:

CloudFront has two rules for SSL pass-through:

Rule 1: The Origin Domain Name you request against must match the SSL cert.
Rule 2: If Rule 1 fails, the Host header must match the SSL cert.

Say my front-end domain is zyx and my Origin Domain Name is lpo.
If the origin returns SSL cert zyx or lpo it will pass. If it presents abc it will fail.
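The two rules can be expressed as a tiny Python function (my own sketch of the behaviour described above, not CloudFront code; real cert matching also handles wildcards and SANs, which this ignores):

```python
def cloudfront_ssl_passes(cert_name, origin_domain, host_header):
    """Sketch of the two pass-through rules:
    Rule 1: the cert must match the Origin Domain Name;
    Rule 2: failing that, it must match the Host header."""
    return cert_name in (origin_domain, host_header)

# Front-end domain zyx, Origin Domain Name lpo, as in the example above
print(cloudfront_ssl_passes("zyx", "lpo", "zyx"))  # True  (passes Rule 2)
print(cloudfront_ssl_passes("lpo", "lpo", "zyx"))  # True  (passes Rule 1)
print(cloudfront_ssl_passes("abc", "lpo", "zyx"))  # False (502 territory)
```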

Now in relation to Squiz.

When we do a low-level SSL cert check against their staging edge network with the server name set to the hostname, we get *.squizedge.net.
If it's a valid configured domain without a custom SSL cert we get *.clients.squiz.net; if it's an invalid domain we get *.squizedge.net.

What we want is to have the *.squizedge.net cert provided to us instead of the *.clients.squiz.net cert.


Below is how to test.

openssl s_client -showcerts -servername staging.squizedge.net -connect staging.squizedge.net:443
depth=0 C = AU, ST = New South Wales, L = Sydney, O = SQUIZ PTY LTD, CN = *.squizedge.net

openssl s_client -showcerts -servername (valid domain without custom ssl cert) -connect staging.squizedge.net:443
depth=0 C = AU, ST = New South Wales, L = Sydney, O = Squiz Australia Pty. Ltd., CN = *.clients.squiz.net

openssl s_client -showcerts -servername (invalid domain) -connect staging.squizedge.net:443
depth=0 C = AU, ST = New South Wales, L = Sydney, O = SQUIZ PTY LTD, CN = *.squizedge.net

openssl s_client -showcerts -servername (valid domain with custom ssl cert) -connect staging.squizedge.net:443
(valid ssl cert depth=0 provided)

ALSO
Do ensure you are picking TLS 1.1 or higher, as an SSLv3 handshake with Squiz Edge is an instant deny.

 openssl s_client -showcerts -connect staging.squizedge.net:443 -servername (valid domain without custom ssl cert)  -ssl3
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : SSLv3
    Cipher    : 0000
    Session-ID: 
    Session-ID-ctx: 
    Master-Key: 
    Key-Arg   : None
    Krb5 Principal: None
    PSK identity: None
    PSK identity hint: None
    Start Time: 1564975196
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
---


Wednesday, July 17, 2019

AWS: Why does the Auto Minor Version Upgrade flag not upgrade to the latest minor version of the database?

Shout out to Asmita V. for this answer.

------------------------------------------
PROBLEM DESCRIPTION
------------------------------------------

Why were the RDS PostgreSQL instances not automatically upgraded to the latest minor version available, even though the Auto Minor Version Upgrade flag is enabled for your instances?

------------------------------------------
RESPONSE
------------------------------------------

The reason why your instances were not upgraded to these minor versions is that AMVU will only upgrade the engine version for your RDS instance if the current engine version is being deprecated, or the new one contains very important cumulative bug fixes and an upgrade is absolutely necessary.

Please note that while we highly recommend that you perform an upgrade to 10.9, this upgrade will not happen automatically as of now using AMVU, as automatic upgrades happen only when absolutely necessary. You can also view such actions using the describe-pending-maintenance-actions command.

If there is an auto minor version upgrade scheduled as maintenance, please be assured that you will get a separate notification explicitly mentioning it. Currently, in this case, the upgrade will have to be applied manually.

Further, at your end, you can check whether the minor version upgrade will happen automatically by using the following CLI command:

$ aws rds describe-db-engine-versions --output=table --engine postgres --engine-version 10.6

Output:
||+-------------------------------------------------------------------------------------------+||
|||                                    ValidUpgradeTarget                                     |||
||+-------------+---------------------+-----------+----------------+--------------------------+||
||| AutoUpgrade |     Description     |  Engine   | EngineVersion  |  IsMajorVersionUpgrade   |||
||+-------------+---------------------+-----------+----------------+--------------------------+||
|||  False      |  PostgreSQL 10.7-R1 |  postgres |  10.7          |  False                   |||
|||  False      |  PostgreSQL 10.9-R1 |  postgres |  10.9          |  False                   |||
|||  False      |  PostgreSQL 11.1-R1 |  postgres |  11.1          |  True                    |||
|||  False      |  PostgreSQL 11.2-R1 |  postgres |  11.2          |  True                    |||
|||  False      |  PostgreSQL 11.4-R1 |  postgres |  11.4          |  True                    |||
||+-------------+---------------------+-----------+----------------+--------------------------+||

As you can see from the above output, for version 10.6 the "AutoUpgrade" column is marked "False" for every minor version upgrade target (10.7 and 10.9), so the upgrade has to be done manually. Please make sure to upgrade to the latest minor version (10.9) so that you won't be prone to any security vulnerabilities, as per the following notice:

[+] https://www.postgresql.org/about/news/1949/
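If you prefer to script that check, here is a sketch that filters the same ValidUpgradeTarget data, as you would get it from the command above with --output=json (the sample below is abbreviated by hand from the table, so treat the data as illustrative):

```python
import json

# Abbreviated sample of `aws rds describe-db-engine-versions --output=json`
raw = json.loads("""{
  "DBEngineVersions": [{
    "Engine": "postgres", "EngineVersion": "10.6",
    "ValidUpgradeTarget": [
      {"EngineVersion": "10.7", "AutoUpgrade": false, "IsMajorVersionUpgrade": false},
      {"EngineVersion": "10.9", "AutoUpgrade": false, "IsMajorVersionUpgrade": false},
      {"EngineVersion": "11.4", "AutoUpgrade": false, "IsMajorVersionUpgrade": true}
    ]
  }]
}""")

# Minor-version targets that AMVU would actually apply automatically
auto = [t["EngineVersion"]
        for v in raw["DBEngineVersions"]
        for t in v["ValidUpgradeTarget"]
        if t["AutoUpgrade"] and not t["IsMajorVersionUpgrade"]]
print(auto)  # [] -> nothing auto-upgrades; 10.9 must be applied manually
```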

AWS: How should we upgrade PostgreSQL 10.6 to 10.9 on CloudFormation-controlled RDS instances?

Shout out to Asmita V. for this answer.

You want to upgrade RDS PostgreSQL instances from 10.6 to 10.9, and in the process you want to understand whether setting the "AllowMajorVersionUpgrade" flag in the CloudFormation template is sufficient, and whether the existing instances will get replaced.

------------------------------------------
RESPONSE
------------------------------------------

Upgrading the PostgreSQL instance from version 10.6 to 10.9 is a minor version upgrade, and hence does not require you to change the value of the "AllowMajorVersionUpgrade" parameter in your CloudFormation template.

In order to upgrade your instances from 10.6 to 10.9, you can modify your CFN stack by simply specifying "EngineVersion" as 10.9 instead of 10.6 in your template. There is no replacement of the existing instance, and hence there will be no loss of data. The existing instance will go into a "Modifying" state.

In order to confirm this behavior, I tried upgrading resources in my test environment and following are my observations:

---------------------------------------------
Testing
---------------------------------------------
Please refer to the following steps that I took in order to upgrade my instance from 10.6 to 10.9:

Sample Stack
============================

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "PostgreSQL RDS Template ",
  "Resources": {
    "pgDB" : {
      "Type" : "AWS::RDS::DBInstance",
      "Properties" : {
        "DBName" : "mydb",
        "DBInstanceClass" : "db.t2.small",
        "AllocatedStorage" : "20",
        "Engine" : "postgres",
        "EngineVersion": "10.6",
        "MasterUsername" : "username",
        "MasterUserPassword" : "password",
        "AutoMinorVersionUpgrade": false
      }
    }
  }
}

Step 1: Make Changes to the existing template
=======================================

1. On the Stacks page of the AWS CloudFormation console, click the name of the stack that you want to update.  https://console.aws.amazon.com/cloudformation
2. Select the Template tab and select View on Designer.
3. Modify the CFN Stack : "EngineVersion": "10.9"
        {
          "AWSTemplateFormatVersion" : "2010-09-09",
          "Description" : "PostgreSQL RDS Template ",
          "Resources": {
            "pgDB" : {
              "Type" : "AWS::RDS::DBInstance",
              "Properties" : {
                "DBName" : "mydb",
                "DBInstanceClass" : "db.t2.small",
                "AllocatedStorage" : "20",
                "Engine" : "postgres",
                "EngineVersion": "10.9",
                "MasterUsername" : "username",
                "MasterUserPassword" : "password",
                "AutoMinorVersionUpgrade": false
              }
            }
          }
        }
4. Validate your template.
5. Select Save and save your template to an S3 bucket.
6. Copy the URL.
       
Ex. https://s3-external-1.amazonaws.com/cf-templates-us-east-1/template1
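Step 1 can also be done programmatically; below is a minimal sketch that bumps the EngineVersion in the sample template above (the template here is trimmed to the relevant properties, so treat it as illustrative):

```python
import json

# Trimmed version of the sample stack from earlier in this post
template = json.loads("""{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "pgDB": {
      "Type": "AWS::RDS::DBInstance",
      "Properties": {"Engine": "postgres", "EngineVersion": "10.6"}
    }
  }
}""")

# Bump the engine version in place; everything else is untouched, which is
# why CloudFormation treats this as a Modify rather than a Replace.
template["Resources"]["pgDB"]["Properties"]["EngineVersion"] = "10.9"
print(template["Resources"]["pgDB"]["Properties"]["EngineVersion"])  # 10.9
```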
       
Step 2: Create Change Set for Current Stack
=======================================

1. On the Stacks page of the AWS CloudFormation console, click the name of the stack that you want to update.  https://console.aws.amazon.com/cloudformation
2. Go to Stack Actions and select "Create change set for current stack".
3. Select "Replace current template"
4. Input the URL that was copied in the Step 1:6.
5. On the Review page, click on Create Change Set.
6. In the preview page, under the Changes you will notice "Modify" under the Action column.
7. Click on Execute.

You can now check on the RDS Console. The status of the RDS instance would have gone to "Upgrading".

However, please note that engine version upgrade (major or minor) is always associated with some amount of downtime.

Even if your DB instance is in a Multi-AZ deployment, both the primary DB instance and standby DB instances are upgraded. The writer and standby DB instances are upgraded at the same time, and you experience an outage until the upgrade is complete.

Therefore, it is always recommended to plan your upgrades during non-business hours.

[+] Upgrading the PostgreSQL DB Engine for Amazon RDS - https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html

[+] Modifying a DB Instance Running the PostgreSQL Database Engine  - Settings for PostgreSQL DB Instances - https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ModifyPostgreSQLInstance.html#USER_ModifyInstance.Postgres.Settings

---------------------------------------------
Conclusion
---------------------------------------------
Therefore, as per my testing, I can confirm that there will be no replacement of the existing instance during the process of upgrading a PostgreSQL instance from v10.6 to v10.9.
You may also go ahead and follow the steps given above in order to upgrade your instances.

---------------------------------------------
REFERENCES
---------------------------------------------

[+] Updating Stacks Using Change Sets  - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html

[+] Creating a Change Set  - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets-create.html

Thursday, May 30, 2019

CloudFormation long form to short form variable substitution


In CloudFormation, you don't need to inject the Region or Account Id into your template unless you are referencing something external.

If you use Joins, maybe look at swapping them for Sub if you are not adding anything or are just dealing with a string, e.g.

from
SourceArn: !Join [ '', ["arn:aws:execute-api:", Ref: AWS::Region, ":", Ref: AWS::AccountId, ":*"] ]

to
SourceArn: !Sub "arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:*"

Wednesday, May 29, 2019

AWS Lambda with SSM Parameter Store variables


So you have used Spring Cloud SSM access for Elastic Beanstalk and Docker, but want to get into Lambda with the same nice config setup.

Sadly, the Spring framework is a bit too heavy for Lambda, and they suggest Dagger 2 or Guice instead. This guide is not about static/dynamic wiring of beans together, but about getting parameters into your beans.

In the old days you had to place all of your config on the environment path, or in a file in S3 which you had to parse. That was OK for simple things, but it was not secure for secrets such as database passwords or other sub-systems external to AWS.

So most people rolled their own KMS decryption system that runs when the Lambda loads. That's nice, but it's still not easy to test locally vs. on the cloud.

This code was inspired by the spring-cloud-aws project, but without the Spring bits. (It does not do the nice overlays with dynamic profile activation etc., but it's still a good step away from having all of your properties on the environment path.)
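As a language-agnostic illustration of the idea (the project described here is Java, so this Python sketch is mine, not the original code): the SSM client is injected, which is exactly what makes it testable locally without AWS credentials. The path and parameter names below are made up for the example.

```python
def load_config(path, ssm_client=None, env=None):
    """Collect parameters under an SSM path, letting environment-style
    overrides win. ssm_client needs a get_parameters_by_path(...) method
    (boto3-compatible); pass a stub for local testing."""
    env = env or {}
    config = {}
    if ssm_client is not None:
        resp = ssm_client.get_parameters_by_path(
            Path=path, Recursive=True, WithDecryption=True)
        for p in resp["Parameters"]:
            # Strip the path prefix: /config/myservice/db.password -> db.password
            config[p["Name"][len(path):].lstrip("/")] = p["Value"]
    config.update(env)  # local overrides beat SSM values
    return config

# Local stub standing in for boto3's SSM client, so this runs offline
class StubSSM:
    def get_parameters_by_path(self, **kwargs):
        return {"Parameters": [{"Name": "/config/myservice/db.password",
                                "Value": "s3cret"}]}

print(load_config("/config/myservice", ssm_client=StubSSM()))
# {'db.password': 's3cret'}
```

On the real Lambda you would pass boto3.client("ssm") instead of the stub.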


Do ensure you have an IAM policy allowing SSM access (here is an excerpt from CFN). The SSMKey is the ARN of the KMS key used to decrypt your parameters (if they are encrypted; otherwise you can drop this action).

  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Ref LambdaRoleName
      AssumeRolePolicyDocument:
        Statement:
        - Action:
            - sts:AssumeRole
          Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
        Version: '2012-10-17'
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName:
            Fn::Join:
              - '-'
              - - Ref: Product
                - Application-Lambda-Policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - ssm:DescribeParameters
                Resource: "*"
              - Effect: Allow
                Action:
                  - ssm:GetParameters
                  - ssm:GetParameter
                  - ssm:GetParametersByPath
                Resource:
                  - !Sub "arn:aws:ssm:*:*:parameter/config/${Service}"
                  - !Sub "arn:aws:ssm:*:*:parameter/config/${Service}/*"
                  - !Sub "arn:aws:ssm:*:*:parameter/config/${Service}_*"
                  - !Sub "arn:aws:ssm:*:*:parameter/config/${Service}_*/*"
              - Effect: Allow
                Action:
                  - kms:Decrypt
                Resource:
                  - Ref: SSMKey

Sunday, March 31, 2019

AWS AutoScalingGroup to Route53 update record function via Lambda

Sometimes you want an Auto Scaling Group to keep a single server online, and you don't want to worry about connecting EIPs to it or have it treated as a pet which needs to be kept alive at all costs.

Or you need to allow UDP access, which Elastic Load Balancers (ELB) and Network Load Balancers (NLB) don't allow. This is for you.

What this does is listen on a Simple Notification Service (SNS) topic for the events the ASG publishes when adding an instance to the pool or terminating one. It then queries the ASG looking for the tag DomainMeta, and with the resulting list of EC2 instances it collects the public IP addresses and updates the attached domain in the recorded Route 53 zone.

The tag should be in the format DomainMeta: <HostedZoneId>:<RecordName>, i.e. DomainMeta: Z10MWC8V7JDDX1:www.mydomain.com, where the first part is the hosted zone the command is sent to and the second part is the A record it is going to change.
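The tag parsing can be sketched as follows (my own reconstruction for illustration, not the original Lambda's code):

```python
def parse_domain_meta(tag_value):
    """Split a DomainMeta tag value into (hosted_zone_id, record_name).
    Raises ValueError on malformed tags so bad input fails loudly."""
    parts = tag_value.split(":")
    if len(parts) != 2 or not all(parts):
        raise ValueError("DomainMeta must look like <HostedZoneId>:<RecordName>")
    return parts[0], parts[1]

print(parse_domain_meta("Z10MWC8V7JDDX1:www.mydomain.com"))
# ('Z10MWC8V7JDDX1', 'www.mydomain.com')
```

Extending this to a comma-delimited list of records, as suggested below, would just mean splitting the record half on commas.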


This is based on work that objectpartners.com did back in 2015. I've improved it to include security so that only one hosted zone is managed, or to allow full account control if you are 100% in control of the tags on the ASGs.

This could easily be updated to read a comma-delimited list from the tag to update multiple A records if required.

Please note: if the last instance is taken out of the pool, the old IP address will be left behind, since Route 53 records can't be null/empty.