
Terraform bash shortcuts

Do you enjoy switching between us-east-1 and eu-central-1? If not, here are some shortcuts which might make your day.

# Jump to the same folder in another region/environment by replacing
# part of the current path.
function change_region(){
  folder_name=$(pwd)
  new_region=$1
  old_region=$2
  new_folder="${folder_name/$old_region/$new_region}"
  cd "$new_folder"
}

function production(){
  change_region 'tf_live_rifiniti_production' 'tf_live_rifiniti_development'
}


function development(){
  change_region 'tf_live_rifiniti_development' 'tf_live_rifiniti_production'
}

function us(){
  change_region 'us-east-1' 'eu-central-1'
}


function eu(){
  change_region 'eu-central-1' 'us-east-1'
}
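
For example, assuming a folder layout that contains the environment and the region in the path (the exact paths below are made up), the shortcuts work like this:

$ pwd
/home/guda/tf_live_rifiniti_production/us-east-1/vpc
$ eu             # same environment, other region
$ pwd
/home/guda/tf_live_rifiniti_production/eu-central-1/vpc
$ development    # same region, other environment
$ pwd
/home/guda/tf_live_rifiniti_development/eu-central-1/vpc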


How to give better examples

Giving an example with foo and bar is not a good idea, because the brain first has to decode foo into something it already knows.

If you start with foo and bar and the example is complicated, in the end you will have only foobars in your head and nothing will be clear.

If you give the example with real-world names (foo is a coffee, bar is a cookie), then it is easy.
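
A toy sketch of the difference (the function names are mine, just for illustration):

# Abstract names: the reader has to keep a mental mapping for foo and bar.
def combine(foo, bar):
    return foo + bar

# Real-world names: the intent is obvious at a glance.
def order_total(coffee_price, cookie_price):
    return coffee_price + cookie_price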

Here is another example from SQLAlchemy, where the documentation illustrates parent-child relations with parents and children.

For me, it is much clearer to work with real-world objects: a User has and belongs to many Alerts, an Alert links Users and Subscriptions, and Subscriptions have and belong to many Users.

Many to Many

from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Alert(Base):
    __tablename__ = 'alerts'
    user_id = Column(Integer, ForeignKey('users.id'), primary_key=True)
    subscription_id = Column(Integer, ForeignKey('subscriptions.id'), primary_key=True)
    extra_data = Column(String(50))
    subscription = relationship("Subscription", back_populates="users")
    user = relationship("User", back_populates="subscriptions")

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    subscriptions = relationship("Alert", back_populates="user")

class Subscription(Base):
    __tablename__ = 'subscriptions'
    id = Column(Integer, primary_key=True)
    users = relationship("Alert", back_populates="subscription")
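
A minimal usage sketch under these models (an in-memory SQLite database and made-up data, just to show the association object at work):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')            # in-memory DB, just for the demo
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

user = User()
subscription = Subscription()
# The Alert row is the association between the user and the subscription.
session.add(Alert(user=user, subscription=subscription, extra_data='daily digest'))
session.commit()

print(user.subscriptions[0].subscription is subscription)  # => True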

Many to One

class Subscription(Base):
    __tablename__ = 'subscriptions'
    id = Column(Integer, primary_key=True)
    items = relationship("Item", back_populates="subscription")

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    subscription_id = Column(Integer, ForeignKey('subscriptions.id'))
    subscription = relationship("Subscription", back_populates="items")    

Rails free hosting

If you want free hosting (I mean free, not cheap or some other kind of hosting), the only option I know of for now is Heroku.

There are some websites which advertise themselves as free Rails hosting, but they are paid.

So stay safe with good old Heroku: slow to boot, but very stable and enjoyable to work with!

Akonadi KDE

I try to keep my system fast and to know what is running on it. I was surprised today to find a MySQL instance running without my explicit permission. It was the Akonadi service.

The Akonadi service is KDE tooling for keeping contacts, calendars and notes. Here is a full list of the apps:

accountwizard akonadiconsole kalarm themeeditors kmail knotes konsolekalendar kontact korganizer ktnef

If you want to purge it from a speed-optimized Linux system, do:

rm -r /home/guda/.local/share/akonadi
rm -rf /home/guda/.config/akonadi
rm -rf /home/guda/.kde/share/config/akonadi-firstrunrc

apt-get remove --purge akonadi-backend-mysql akonadi-backend-postgresql akonadi-backend-sqlite mariadb-server-core-10.3
pkill -f akona

Restoring it with an SQLite database

I will try to run it with an SQLite database for a while to see if there are any speed/RAM problems.

First, change the database driver in the settings to SQLite:

mkdir -p /home/guda/.config/akonadi/
cat > /home/guda/.config/akonadi/akonadiserverrc <<'EOF'
[%General]
Driver=QSQLITE3

[QSQLITE3]
Name=/home/guda/.local/share/akonadi/akonadi.db

[Debug]
Tracer=null
EOF

Then install some packages

apt install akonadi-backend-sqlite kdepim-runtime  kdepim
apt install kjots 

Then, as YOURSELF (not root), run:

akonadictl start

And if you want to see all the apps using Akonadi, run:

akonadiconsole

Make tables visible on postgres after pgloader

Hello folks,

You have just used pgloader to import a database into Postgres, but when you do

space_util=# \dt
Did not find any relations.

The problem is that you have to fix the search path for your tables.

Here is how to do it (or check the link for more ways)

space_util=# ALTER DATABASE space_util SET search_path = space_util,public;
space_util=# \dt
                 List of relations
   Schema   |      Name       | Type  |  Owner
------------+-----------------+-------+----------
 space_util | some_nice_table | table | postgres
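
If you only need it for your current session, there is also a per-session variant; it takes effect immediately, while the ALTER DATABASE one becomes the default for new connections:

space_util=# SET search_path TO space_util, public;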

How s3fs caching works

…or how it took one day just to add one line of code

s3fs.S3FileSystem.cachable = False

Adding caching under the hood and not mentioning it in the documentation – that is called a dirty trick.

My case was a Lambda processing S3 files. When a file arrives on S3, a Lambda processes it and triggers the next Lambda. The next Lambda works fine only the first time.

The first Lambda uses only boto3, and there is no problem there.

The second Lambda uses s3fs.

The second invocation of the Lambda reuses the already initialized execution context, so s3fs thinks it knows which objects are on S3, but it is wrong!
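
For context, here is a minimal sketch of what the second Lambda can look like with the instance cache turned off (the handler and bucket names are made up):

import s3fs

# Assumption: disable instance caching at import time, before any
# S3FileSystem object is created, so warm invocations re-list S3.
s3fs.S3FileSystem.cachable = False

def handler(event, context):
    fs = s3fs.S3FileSystem(anon=False)
    # Hypothetical bucket/prefix, just for illustration.
    keys = fs.ls('my-bucket/processed/')
    return {'count': len(keys)}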

So… I found this issue – thank you, jalpes196!

Another way is to invalidate the cache…

from s3fs.core import S3FileSystem


S3FileSystem.clear_instance_cache()
s3 = S3FileSystem(anon=False)
s3.invalidate_cache()

Daily systemd commands

You need to create a unit file.

Both the timer and the service unit must be enabled if you want them to run.
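
As a reminder, a minimal pair of files for the advanced.service / advanced.timer used below could look roughly like this (the command and the schedule are placeholders, adjust them to your job):

# /etc/systemd/system/advanced.service
[Unit]
Description=Advanced job (placeholder)

[Service]
Type=oneshot
# Hypothetical script path; put your real command here.
ExecStart=/usr/local/bin/advanced-job.sh

# /etc/systemd/system/advanced.timer
[Unit]
Description=Run advanced.service daily (placeholder schedule)

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable and start the timer with systemctl enable --now advanced.timer.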

Reload

Must be done after changing unit/timer files in /etc/systemd/system:

 systemctl daemon-reload

View the logs

journalctl -u advanced.service 

The status of the unit/service

systemctl status advanced.service
systemctl status advanced.timer

Check the timers

systemctl list-timers --all

Why one should use Firefox in 2020

I switched from Google Chrome to Chromium for security and privacy reasons. Now I am switching from Chromium to Firefox because of a number of issues.

Chromium stopped shipping deb packages and started using snap. Snap runs in a sandbox (probably using cgroups) and hides very important folders from the browser:

  • /tmp
  • ~/.ssh

Certificates
My access to a payment website was rejected because my certificates live in ~/.ssh, which the snapped browser cannot see.

System tmp
When I download junk files/attachments I store them in /tmp, and on the next reboot /tmp is cleaned automatically. Since I can't access /tmp from Chrome, I have started using ~/tmp/ and now have tons of useless files.

Speed
When I switched to Firefox I noticed that this browser is much faster than Chrome.

Other annoyances with Chromium:

  • After migrating to snap, Chromium does not work correctly with D-Bus.
  • No easy way to add a custom search engine.

Sort AWS S3 keys by size

A naive script which sorts the keys in an S3 folder by size. It will not work if the keys contain spaces (see the variant after the script).

Here is a usage example

aws s3 ls BUCKETNAME/signals/wifi/  |  ~/bin/aws-s3-sort.rb

#!/usr/bin/ruby
# Sort "aws s3 ls" output by object size (naive: keys must not contain spaces).
content = ARGF.read

lines = content.split("\n")

# Collect [size, key] pairs; "aws s3 ls" columns are: date, time, size, key.
pairs = []
lines.each do |line|
  cells = line.split(' ')
  next if cells.length < 4             # skip "PRE folder/" directory lines
  pairs << [cells[2].to_i, cells[3]]
end

pairs.sort_by { |size, key| size }.each do |size, key|
  puts "#{size} -> #{key}"
end
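
If your keys do contain spaces, one possible tweak (an untested sketch) is to split each line into at most four fields, so everything after the size column stays together as the key:

#!/usr/bin/ruby
# Variant that tolerates spaces in keys: split into at most 4 fields.
content = ARGF.read

pairs = content.split("\n").map do |line|
  date, time, size, key = line.split(' ', 4)
  key && [size.to_i, key]              # nil (skipped) for "PRE folder/" lines
end.compact

pairs.sort_by { |size, key| size }.each do |size, key|
  puts "#{size} -> #{key}"
end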

Extract a huge number of files from AWS s3 glacier

You can first try s3cmd, and if it doesn't work, go for a more advanced solution which supports millions of files.

s3cmd restore \
    --recursive s3://bucket.raw.rifiniti.com \
    --restore-days=10

To bulk-request files to be restored from Glacier I use this script. I hope it will be useful to you too.

#!/bin/bash
#
# Get s3 objects from glacier by prefix
# The prefix is optional!
#
# How to use:
#  ./export-prefix.sh bucketName 30 2019-04-30
#  ./export-prefix.sh bucketName 30
#
#
export bucket=$1

# How many days to keep the objects
export day=$2
export prefix=$3

if [ -z "$prefix" ]
then
  cmd="aws2 s3api list-objects  --bucket $bucket"
else
  cmd="aws2 s3api list-objects  --bucket $bucket --prefix $prefix"
fi

readarray -t KEYS < <($cmd | jq '.Contents[] |  select( .StorageClass != "STANDARD" ) | ."Key"')
for key in "${KEYS[@]}"; do
  echo "aws s3api restore-object --bucket $bucket --key ${key} --restore-request '{\"Days\":$day,\"GlacierJobParameters\":{\"Tier\":\"Standard\"}}'" >> /tmp/commands.sh
done

echo "Generated file /tmp/commands.sh"

echo "Splitting the huge file into small files: /tmp/sub-commands*"
split -l 1000 /tmp/commands.sh /tmp/sub-commands.sh.
chmod a+x /tmp/sub-commands*


The script will generate a /tmp/commands.sh file with all the commands that you need to run.

When you have a lot of files it may not be possible to run the generated script in one go, because it would be killed at some point. To avoid this, we split /tmp/commands.sh into parts; this is what the last part of the shell script does.

Now use this snippet to run the commands file by file.

for x in /tmp/sub-commands*; do
  echo "working on $x"
  bash "$x"
done

Or, if you have GNU parallel installed, you can run them much faster with:

for x in /tmp/sub-commands*; do
  echo "working on $x"
  parallel -j 10 < "$x"
done

Update: Make the script work with keys containing spaces

Update 2: Make it work with a lot of files and add a parallel example
