Cannot connect to SMB shares on Windows 10

I recently set up a new Windows 10 machine. After eight years with only Apple devices, I finally wanted to catch up with the PC and Windows world again.

For a day or two, I tried to connect my laptop to my NAS at home. I checked firewalls, credentials, server settings, usernames, and the network. I checked everything twice, three times, four times, and tried almost every permutation. Eventually, I gave up.

The actual problem was that Windows 10 gave no feedback at all when trying to connect to an SMB (aka Server Message Block) share on my server. All connection attempts just ended in a silent fail. In terms of user experience, this is a violation of Grice’s maxims: Windows 10 simply chooses to opt out of the conversation.

As a very last attempt, I tried the option to map a network drive. After entering user credentials again and again, Windows 10 finally came up with a useful error message for the very first time.

Error while connecting SMB1 shares on Windows 10

“This share requires the obsolete SMB1 protocol…” is information one can actually work with.

Enabling SMB1 turns out to be quite easy. Head to Turn Windows Features on or off and scroll down to SMB 1.0/CIFS File Sharing Support. There, check SMB 1.0/CIFS Client to enable SMB1 support.
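If you prefer the command line, the following call from an elevated PowerShell prompt should enable the same feature. The feature name SMB1Protocol-Client matches current Windows 10 builds, but treat this as a sketch and verify the name with Get-WindowsOptionalFeature first:

> Enable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol-Client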

SMB 1.0/CIFS Support on Windows 10

Once done, connecting to servers providing (only) SMB1 works again on Windows 10.

Have You Been Pwned?

Another way to figure out whether your account and the corresponding account data have been compromised is https://haveibeenpwned.com/

Have I Been Pwned 

Besides the web form, a RESTful API is provided for automated checks. Right now, 6,474,028,664 accounts from about 340 hacked websites are listed. A list of the breaches the data comes from is provided as well. All together, it is an easy way to check whether your digital identity was recently stolen.
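For example, an automated lookup could be a simple curl call like the sketch below; note that the current v3 version of the API requires an API key, and both the address and the key shown here are placeholders:

> curl -s "https://haveibeenpwned.com/api/v3/breachedaccount/mail@example.org" -H "hibp-api-key: YOUR_API_KEY" -H "user-agent: personal-leak-check"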

Collection #n

After Collection #1, it did not take long until additional sets of leaked account and password information appeared. Meanwhile, there are Collection #2 to Collection #5.

All together, more than 8,000,000,000 accounts have meanwhile been leaked. While I accept, and actually expect, systems being hacked at some point – remember, it is not about the if, it is about the when – I cannot understand how actual passwords end up being stored in plain text.

When I designed a large multi-user system some years ago, we did not save clear-text passwords in the system. We did not even transport the password from the client to the server in plain text. That said, I still struggle to imagine how anyone could even think of storing passwords in plain text.
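Just for illustration, storing a salted hash instead of the password itself is a one-liner; with a recent OpenSSL (1.1.1 or later), the following generates a salted SHA-512 crypt string – a sketch of the principle, not of any particular system:

> openssl passwd -6 'correct horse battery staple'

The resulting string embeds a random salt, so even identical passwords yield different hashes, and the original password cannot simply be read from a leaked database.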

If you are interested in whether any of your passwords have been leaked, you should check the Identity Leak Checker service provided by the Hasso Plattner Institute.

HPI Password Leak Checker

I actually checked three mail addresses I usually use to sign in at various services.

Leaked Passwords #1

As this is a mail address I don’t use to sign in at public services a lot, the result was not very surprising. Actually, this is how I found an account to delete. For my second account, things do not look that good: the mail address (and probably passwords) appears in Collections #1 to #2.

Leaked Passwords #2

The same is true for the third and last address I use for public services.

Leaked Passwords #3

While I do reset passwords from time to time, it is still worrying that so many passwords have been leaked. I will probably change the passwords of some of my major accounts and delete some accounts I won’t use anymore – or have never used at all, such as a MySpace account I had completely forgotten about.

That way, the HPI Identity Leak Checker might also help to spot forgotten accounts worth closing.


Blogging again with MarsEdit

Since WordPress came up with its new editor, writing simply does not spark fun anymore. I cannot even tell why I do not like the new editor. Therefore, I recalled MarsEdit, which I used quite some time ago.

MarsEdit 4

I am still not disappointed by the editor. Connecting to the WordPress installation worked like a charm. Looks like I can start writing blog articles in a “traditional” manner again.

MarsEdit 4 Editor

That said, this is going to be the first article written with MarsEdit in a long time.

Blue Dragon Smoothie

In my current position, my co-workers are very disrespectful of my time. Meetings are often scheduled over the lunch break, or someone intercepts me in front of the elevator, absorbing my lunch break with the words “Do you have a few moments…?”

Eventually, I started to bring my own kind of “fast food” in the form of smoothies. As I am very bad at memorizing recipes, I started writing them down in my blog. Feel free to experiment and comment on them.

For the logistics, I started to recycle true fruits bottles, which are available in various sizes.

My first try, which I call Blue Dragon, uses the following ingredients:

  • 1 handful of frozen or fresh blueberries
  • 2 apples
  • 2 bananas
  • 1 carrot
  • 1 slice of honey melon
  • 1 tbsp of almond butter
  • 100 ml of almond milk
  • some water

As tomorrow is my first working day after the Christmas break, I hope this one gets me through the day…

Twothousandandeighteen

2018 was not bad at all; quite a few things happened. Here are some of the major achievements and events of last year.

  • After I bought a 4K TV, I finally got my XBOX One X as there was a great deal at Amazon. Finally, I was able to fully utilize my TV.
  • In February, I started teaching Interactive Systems and MCI at the Baden-Württemberg Cooperative State University in Karlsruhe. It looks like the course went quite well, as the university decided to offer it a second time in autumn, when I gave the lecture again. As a fun fact, they forgot to tell me until I got a note a few days before the course was scheduled.
  • In May, I started teaching a second lecture, covering data structures and algorithms, also at the Baden-Württemberg Cooperative State University in Karlsruhe; it will be given annually.
  • As a result of the lecture, I started to write a book covering this topic using .NET Core. However, I am afraid I won’t be able to finish it in 2019.
  • I was also appointed a member of the examination board of the Baden-Württemberg Cooperative State University in Karlsruhe, where I advised two students writing their bachelor theses.
  • Also in May, I became a certified Product Manager, after becoming a Certified Scrum Product Owner in 2017. The course, given by Product Focus, based in London, UK, is highly recommended.
  • Our son celebrated his very first birthday and started walking this year. Not really a personal achievement of mine, but something you are proud of as a dad.
  • In September, we switched from HomeKit to Amazon Alexa and have meanwhile changed almost all our lights to Hue and IKEA TRÅDFRI.
  • There is only little I was able to achieve in my day job; however, our team managed to finish a five-year, multi-million migration.
  • As side projects, I started learning Ansible as well as Docker and moved my 10-year-old handcrafted, home-brewed server to the latest generation: an automated, container-based setup.
  • I also started blogging again, especially about my experiences with Docker, Ansible, and further automation.
  • Unfortunately, our rabbits died of rabbit hemorrhagic disease, which was a rather sad event. The vaccine is not easily available for this particular variant of the virus. It was an event through which I learned a lot about vaccines, viruses, and diseases in general.
  • Also, it seems we experienced one of the longest droughts in Germany; we did not have any rain for weeks, even months.
  • In December, I was appointed to a teaching assignment for Software Engineering of Complex Systems at the University of Heilbronn. I will start this lecture in February ’19, together with a lab covering software development. I am really looking forward to these new courses.
  • All over the year, I suffered from an overall sleep deficit. I wish I knew where our 16-month-old gets his energy from.
  • One highlight was definitely visiting my former manager and old friend from Microsoft Research, who is meanwhile head of a university.
  • Eventually, I started using Twitter again.

Although I thought 2018 was a rather boring year, it looks like there was a lot going on. Unfortunately, I was not able to read as many books as I am used to, but this might change in 2019. So I am looking forward to an exciting next year and many new things to learn.

Proper Logwatch Configuration using Ansible

On my way to setting up proper monitoring for my server, I just installed Logwatch to receive a daily summary of what recently happened on the machine. I will have a look into Prometheus, Grafana, Icinga, etc. later. However, for now, I just wanted a quick daily summary of what’s going on on the machine. Eventually, I had to fix a “No such file or directory” error that occurred along the way.

Therefore, I decided to use Logwatch as a lightweight solution for my needs.

Installation Script

The Ansible script to install Logwatch is straightforward:

- name: Install logwatch
  apt:
    name: logwatch
    state: latest
  tags:
    - logwatch

- name: Create logwatch.conf file for customisations
  file:
    path: /etc/logwatch/conf/logwatch.conf
    state: touch
  tags:
    - logwatch

- name: E-Mail to
  lineinfile:
    dest: /etc/logwatch/conf/logwatch.conf
    regexp: "^MailTo ="
    line: "MailTo = {{ logwatch_email }}"
    state: present
  tags:
    - logwatch

- name: Set detail
  lineinfile:
    dest: /etc/logwatch/conf/logwatch.conf
    regexp: "^Detail ="
    line: "Detail = {{ logwatch_detail }}"
    state: present
  tags:
    - logwatch
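The two variables referenced above, logwatch_email and logwatch_detail, have to be defined somewhere in your Ansible setup, for example like this (the location group_vars/all.yml is just an assumption about your project layout):

logwatch_email: mail@example.org
logwatch_detail: Low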

Configuration & Troubleshooting

I basically set two parameters: the e-mail address and the detail level I want for the report. It is important to know the order in which Logwatch applies your configuration settings. Following the recommendations, I did not change anything in the configuration file at

/usr/share/logwatch/default.conf/logwatch.conf

rather I decided to copy the file to

/etc/logwatch/conf/

The reason is that Logwatch scans for configuration parameters in the following order, where each step overwrites the previous one:

  • /usr/share/logwatch/default.conf/*
  • /etc/logwatch/conf/dist.conf/*
  • /etc/logwatch/conf/*
  • The script command line arguments

Eventually, I ended up with the following error:

/etc/cron.daily/00logwatch:
/var/cache/logwatch No such file or directory at /usr/sbin/logwatch line 634.
run-parts: /etc/cron.daily/00logwatch exited with return code 2
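The message suggests that the directory /var/cache/logwatch is simply missing. One workaround might therefore be to create it explicitly with an additional Ansible task like the following sketch; I chose a different route, though, as described next.

- name: Ensure logwatch cache directory exists
  file:
    path: /var/cache/logwatch
    state: directory
  tags:
    - logwatch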

To fix this, avoid copying the original configuration file to one of the other places; I had done so following a recommendation I received. Instead, I now touch a new configuration file and only set the two parameters MailTo= and Detail=. Both are set using Ansible variables in my scripts. The additional configuration file now looks pretty boring, though:

MailTo = mail@example.org
Detail = Low
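To check whether the settings are actually picked up, Logwatch can also be run manually; with --output stdout the report is printed to the terminal instead of being mailed:

> sudo logwatch --output stdout --detail Low --range today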

You can also provide these parameters when calling the script in the cron job. Using Ansible, the modification would look like the following:

- lineinfile:
    dest: /etc/cron.daily/00logwatch
    regexp: "^/usr/sbin/logwatch"
    line: "/usr/sbin/logwatch --output mail --mailto {{ logwatch_email }} --detail {{ logwatch_detail }}"
    state: present
    create: yes

Eventually, I decided against changing the cron job call, as one can never be sure the file won’t be overwritten during package updates. The same holds for the configuration file at its original location.

tl;dr

Setting up Logwatch using Ansible might cause strange “No such file or directory” errors during the cron job run. These can be avoided by applying additional configuration settings at the appropriate configuration locations.

Personal DevOps #3

While most of the prerequisites for my automated server setup are met, I came across some issues when I started with my very first Ansible playbooks.

First Ansible Playbooks

First of all, I wanted to start with a quite simple ping playbook to ensure the servers are reachable by Ansible.

# Playbook to ping all hosts 
---
- hosts: all
  gather_facts: false
  tasks:
    - ping:
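Assuming the playbook is saved as ping.yml and the inventory lives in a file called hosts – both names are merely my choice – it can be run as follows:

> ansible-playbook -i hosts ping.yml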

When I ran this playbook, I was immediately confronted with the very first error. I really love it when such things happen – nothing motivates more than immediate failures like the following.

FAILED! => {
"changed": false,
"module_stderr": "Shared connection to xxx.xxx.xxx.xxx closed.\r\n",
"module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 127
}

As I started with a minimal Ubuntu 18.04 LTS installation, there is simply no Python 2 installed. However, to run the Ansible tasks on the node, Python is required. I made use of Ansible’s raw task to update the package lists and to install the package python-minimal. In addition, I added the package python2.7-apt to this bootstrapper, as it is needed later on. Once Python was installed, the ping playbook worked without any problems.

# Bootstrap playbook to install python 2 and python-apt
# It checks first so no unnecessary apt updates are performed
---
- hosts: all
  gather_facts: False
  
  tasks:
  - name: install python 2
    raw: test -e /usr/bin/python || (apt -y update && apt install -y python-minimal)
  - name: install python-apt 
    raw: test -e /usr/lib/python2.7/dist-packages/apt || (apt install -y python2.7-apt)

For both packages, I test for the corresponding directories on the node to avoid unnecessary updates.

Note: When testing for a directory on the shell, the following line comes in very handy:

> [ -e /usr/lib/python2.7/dist-packages/apt ] && echo "Found" || echo "Not found"

As a second step, I created a maintenance playbook to update and upgrade the packages on my node.

# Playbook to update Ubuntu packages 
---
- hosts: all
  gather_facts: false
  tasks:
  - name: update and upgrade apt packages
    become: true
    apt:
      upgrade: yes
      update_cache: yes
      cache_valid_time: 86400

Before including the python-apt package in the bootstrapper, I got the following error when dry-running the playbook:

fatal: [xxx.xxx.xxx.xxx]: FAILED! => {"changed": false, "msg": "python-apt must be installed to use check mode. If run normally this module can auto-install it."}
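For reference, such a dry run is triggered with the --check flag; the file names are, again, only examples:

> ansible-playbook -i hosts update.yml --check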

Conclusion

While this is certainly no rocket science, I now have a few essential scripts to bring my server to a baseline I can start working from.

Personal DevOps #2

Even though I wanted to approach this in a structured way, the beginning of this topic is quite a mess: you have to set up everything and pull all the pieces together before you can start working properly.

Getting a Server

First of all, I picked up a new root server: I found a great offer at Netcup and ordered one with unlimited traffic. I will install Ubuntu 18.04 LTS on this particular machine.

That way, I can move bit by bit from my old, handcrafted, home-brewed server to the new one instead of replacing the old server in one go. Once everything works fine, I will migrate the data and point the DNS entries to the new server.

macOS as Ansible Control Machine

When starting such a project, one should expect problems from the very first moment. For me, it already started when I wanted to install Ansible on macOS. Why I chose Ansible over Chef and Puppet will be covered later.

There is no native Ansible package available for macOS. Instead, you can use Homebrew. Assuming Homebrew is already installed, the following command should do the job.

> brew install ansible

However, for me, it immediately ended in some errors:

xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools),
missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun

As so often, the latest macOS version, Mojave, might be the reason. Running

> xcode-select --install

installs the command line tools as part of Xcode. Once accomplished, the Homebrew command should work like a charm. Eventually, Ansible is available on my laptop, which is now my control machine.

Setting up the SSH Access

Usually, I deactivate all password-based logins on a server and allow only RSA-key-based logins. To generate a key, you can follow the instructions on ssh.com.

To upload the key, ssh-copy-id is needed. However, once again, this is not available on macOS by default, and you have to install it using

> brew install ssh-copy-id

Now you can upload the key using

> ssh-copy-id -i ~/.ssh/key user@host

As planned, this was the only manual step necessary on the server. All future settings, including deactivating password-based access, should be applied using Ansible scripts.
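As a sketch of what that could look like later: the following task – assuming the stock sshd_config location – would disable password authentication; sshd has to be restarted afterwards, e.g., via a handler:

- name: Disable password-based SSH logins
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: "^#?PasswordAuthentication"
    line: "PasswordAuthentication no"
    state: present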

Setting up a Repository

As a (former) developer, I hardly do anything without putting my work into a version control system. For my Ansible scripts, I decided to set up a private GitHub repository. Although I currently use GitLab and Subversion at work, as well as running a Subversion server at home, I meanwhile put almost everything into GitHub repositories. Therefore, GitHub comes in quite handy.

Why Ansible?

For automation, there are several frameworks. One major advantage of Ansible is that only a single so-called control machine is needed; there is no further Ansible infrastructure required. Once you have installed Ansible as described above, you are ready to go. For Puppet, you again need to deal with a server and agents, and eventually you end up with running daemons. While this is a feasible approach, e.g., to manage my team’s 100 servers at work, it is overkill for personal use. The same goes for Chef. That is why I decided on Ansible: it seems a quite feasible approach for personal use, with little overhead and the fewest attack vectors.

If you are not familiar with Ansible, I recommend watching the following 14-minute video, which gives a good overview of Ansible’s capabilities. You might want to skip the last four minutes, as they deal with Red Hat’s Ansible Tower, which targets enterprises.


Personal DevOps #1

I recently gave a talk introducing DevOps and how fully automated toolchains add value. Reflecting on this in my daily work, I realized that my very personal servers are managed entirely manually. Every single change is an epic battle. Therefore, I decided to get my hands dirty and set up a fully automated toolchain for my personal server.

Why!?

I want to learn about the tools, the automation, and the processes necessary. Maybe you ask yourself why one should do this. I simply do it to catch up with current technology developments. I meet a lot of technical managers who actually have no clue about the technology they are talking about. Personally, I don’t want to be one of those managers.

On the other hand, I have to move to a more recent version of my server’s OS but I don’t want to set up the server again by hand.

About one year ago, I had to re-install the server completely after a configuration mistake; three months ago, I had to spend hours and hours manually cleaning my WordPress installations due to an infected site. So there is an actual need as well.

I decided to blog about this adventure, which is especially interesting as the blog itself has to move to the new server at some point. As usual, I blog to keep notes for myself. However, if you find the articles helpful, feel free to drop me a line; recommendations and tips are warmly welcome.

Goals

I thought about some goals I want to achieve. Ultimately, I want a fully automated deployment pipeline for at least my main server. To achieve this, I want to

  • upgrade to a new version of my server’s OS
  • apply automation using tools such as Puppet, Chef, or Ansible
  • containerize everything – at least Mail and Web-Server have to be deployed independently
  • automatically setup and create my WordPress instances
  • set up a mechanism to easily deploy .NET Core Web APIs in containers
  • create a single place where all “data” lives
  • create a mechanism to backup the data
  • establish monitoring and alerting
  • push changes through the toolchain fully automatically
  • run everything through repositories and a CI/CD pipeline

tl;dr

I am going to replace my server and build a fully automated CI/CD pipeline.