Upgrade Single Package in Ubuntu

Refer: upgrade-single-package

As usual you need to fetch an updated index from the Internet:

sudo apt-get update

Now upgrade a package, where Package is the name of the package you want to upgrade. The first form upgrades the package only if it is already installed; the second installs it if missing, or upgrades it otherwise:

sudo apt-get --only-upgrade install Package
sudo apt-get install Package

Bonus: list the packages that have upgrades available

sudo apt-get update
sudo apt list --upgradable
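As a small sketch of the steps above: before running the upgrade, you can check whether a specific package actually has a newer candidate version. This assumes a Debian/Ubuntu system with apt-cache available; the package name curl is only an illustration.

```shell
# Compare the installed version of one package against the candidate
# version in the index (hypothetical package "curl" for illustration).
pkg=curl
if command -v apt-cache >/dev/null 2>&1; then
  installed=$(apt-cache policy "$pkg" | awk '/Installed:/ {print $2}')
  candidate=$(apt-cache policy "$pkg" | awk '/Candidate:/ {print $2}')
  if [ "$installed" != "$candidate" ]; then
    echo "$pkg can be upgraded: $installed -> $candidate"
  else
    echo "$pkg is already at the candidate version ($installed)"
  fi
else
  echo "apt-cache not available on this system"
fi
```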

Using the Dolphin File Manager

Dolphin (the KDE file manager) has a nice embedded-terminal integration, but you need to install the additional Konsole package for the terminal panel to work. Ark adds archive extraction support:

sudo apt-get install dolphin konsole
sudo apt-get install ark

Simple Example to Avoid Writing as Root to Host System

A simple way to write to the host as the current user

Refer: docker-shared-permissions

docker create --name ubuntu1804 \
  --net=host \
  -v ${PWD}:/home \
  --user "$(id -u):$(id -g)" \
  -it mruckman/ubuntu1804:201001

docker start ubuntu1804
docker exec -it ubuntu1804 /bin/bash
docker stop ubuntu1804
docker rm ubuntu1804
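A tiny wrapper can save retyping those flags. This is only a sketch, and a dry run at that: it prints the docker run command it would execute, reusing the image name from the example above; swap in your own image and mount points.

```shell
# Print (rather than run) a docker command that maps the container user
# to the current host uid/gid, so files written under the bind mount
# stay owned by you. Image name reused from the example above.
run_as_me() {
  echo docker run --rm -it \
    --net=host \
    -v "$PWD:/home" \
    --user "$(id -u):$(id -g)" \
    mruckman/ubuntu1804:201001 "$@"
}

cmd=$(run_as_me /bin/bash)
echo "$cmd"
```

Remove the leading echo once the command looks right for your setup.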

Or you can build your own image, passing in the user and group IDs from the host machine.

Build the right image
Now it gets more interesting. Here is how you can build, configure, and run your Docker containers so that you don't have to fight permission errors and can access your files easily.

Since you should be creating a non-root user in your Dockerfile anyway, this is a convenient place to do it. While we're at it, we might as well set the user ID and group ID explicitly.

Here is a minimal Dockerfile which expects to receive build-time arguments, and creates a new user called “user”:

FROM ubuntu

ARG USER_ID
ARG GROUP_ID

RUN addgroup --gid $GROUP_ID user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user
USER user

Refer: add-user-to-docker-container for more info on adduser

We can use this Dockerfile to build a fresh image with the host uid and gid. The image needs to be built specifically for each machine it will run on, so that the IDs match.

Then we can use this image for our command. The user ID and group ID are already correct, without having to specify them when running the container.

docker build -t myimage \
  --build-arg USER_ID=$(id -u) \
  --build-arg GROUP_ID=$(id -g) .

docker run -it --rm \
  --mount "type=bind,src=$(pwd)/shared,dst=/opt/shared" \
  --workdir /opt/shared \
  myimage bash

No need to use “chown”, and no annoying permission errors anymore!
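The whole point of the uid/gid mapping is this ownership contract: files the container writes into the bind mount come out owned by you. The host side of that contract can be demonstrated without Docker at all; a minimal sketch, using a throwaway temp directory:

```shell
# Create a probe file and confirm it carries the current uid:gid --
# exactly what a container running as --user "$(id -u):$(id -g)"
# produces when writing into a bind mount.
dir=$(mktemp -d)
touch "$dir/probe"
owner=$(stat -c '%u:%g' "$dir/probe" 2>/dev/null || stat -f '%u:%g' "$dir/probe")
echo "probe owned by $owner, expected $(id -u):$(id -g)"
rm -rf "$dir"
```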

Using Python Virtual Environments

Refer: https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/

Install on Ubuntu 18.04

sudo apt-get install python3-venv

Create a virtual environment

python3 -m venv env

Activating a virtual environment

source env/bin/activate

Confirm you are pointing to your virtual environment

which python

You should see something like

.../env/bin/python

Leaving the virtual environment

deactivate
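The steps above can be strung together into one throwaway script. This sketch uses --without-pip so it also works offline; drop that flag in normal use so pip is available inside the environment.

```shell
# End-to-end venv round trip in a temp directory: create, activate,
# confirm which python is active, then deactivate and clean up.
tmp=$(mktemp -d)
python3 -m venv --without-pip "$tmp/env"
. "$tmp/env/bin/activate"
pyloc=$(command -v python)
echo "python now resolves to: $pyloc"
deactivate
rm -rf "$tmp"
```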

Installing packages while inside the virtual environment

python3 -m pip install requests

Installing specific versions

python3 -m pip install requests==2.18.4

Upgrading packages

pip can upgrade packages in-place using the --upgrade flag. For example, to install the latest version of requests and all of its dependencies:

python3 -m pip install --upgrade requests

Using requirements files

Instead of installing packages individually, pip allows you to declare all dependencies in a Requirements File. For example you could create a requirements.txt file containing:

requests==2.18.4
google-auth==1.1.0

Tell pip to install all the packages in this file using the -r flag:

python3 -m pip install -r requirements.txt

Freezing dependencies

Pip can export a list of all installed packages and their versions using the freeze command:

python3 -m pip freeze

Which will output a list of package specifiers such as:

cachetools==2.0.1
certifi==2017.7.27.1
chardet==3.0.4
google-auth==1.1.1
idna==2.6
pyasn1==0.3.6
pyasn1-modules==0.1.4
requests==2.18.4
rsa==3.4.2
six==1.11.0
urllib3==1.22

This is useful for creating Requirements Files that can re-create the exact versions of all packages installed in an environment.
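The common round trip, sketched here with a hypothetical /tmp/requirements-snapshot.txt path: freeze the current environment into a requirements file that a later pip install -r can replay. The pip availability check just lets the sketch skip cleanly on systems without pip.

```shell
# Snapshot the current environment's pinned packages into a file.
skipped=0
if python3 -m pip --version >/dev/null 2>&1; then
  python3 -m pip freeze > /tmp/requirements-snapshot.txt
  echo "captured $(wc -l < /tmp/requirements-snapshot.txt) pinned requirements"
else
  skipped=1
  echo "pip is not available here"
fi
```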

Fix Docker DNS Cache behind VPN

It seems like Docker caches the host's /etc/resolv.conf as it was when the daemon started up. So I restarted Docker after connecting to the VPN, and now containers are able to resolve hosts that are in my company's DNS.

If you are developing on Ubuntu, you will need to use "--net=host" for containers to resolve names behind the VPN, such as this:

docker run --rm \
  -v ${PWD}:/home \
  -it \
  --net=host \
  mruckman/selenium_python:212301 /bin/bash
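To see what the daemon snapshotted, you can inspect the host's resolver file directly; containers started with --net=host use this same configuration. The restart command in the comment assumes systemd, as on stock Ubuntu.

```shell
# Show the DNS servers and search domains currently on the host. If this
# changed after connecting to the VPN, restart the Docker daemon so it
# picks up the new values:
#   sudo systemctl restart docker
grep -E '^(nameserver|search)' /etc/resolv.conf || true
```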