Tag Archives: Linux

Adventures in Plasma Land

Or, can a man fall in love with KDE, 20 years later

KDE has never been able to capture my heart. I remember trying KDE 1.1 or so on Mandrake Linux 6 in the late 90s. It just never clicked for me; I opted instead for Enlightenment. Ever since then, I’ve tried it every year or so to see if I could understand people’s love for it. I didn’t fall for KDE 3.5, which so many people remember fondly, or KDE 4, which people recall much less fondly. I’ve peeked in on KDE Plasma 5 during its development, but it was never able to bring me in. But here I am, in 2019, about 20 years since I started using Linux, and I’m giving KDE Plasma 5.16.4 a go!

Background

So, why now? Well, as I said, I try KDE every now and then. Something about it always draws me in, before turning me off again. I recently ended up down an internet rabbit hole following articles on Plasma mobile, Qt Python bindings and even Qt C# bindings for .Net Core (and I love me some C#). I wondered: “Could Plasma, Plasma Mobile, Qt, and C# be the epic combo of my dreams?” Let’s find out!

Setup

I’m running KDE Plasma 5.16.4 on KDE Neon Linux. Neon is based on Ubuntu 18.04 LTS, using KDE’s own, constantly updated repos for Plasma itself. I figured as long as I’m giving it a fair shake, I ought to go right to the source. (While writing this article, I’ve upgraded across a few versions of Plasma 5.x.)

I’m running it on my trusty desktop with a Core i7 920, AMD Radeon RX 560, 12 GB RAM, a 512 GB SSD, and a HiDPI monitor at 3840×2160. It’s my daily driver at home for general computing, gaming, and game development.

I’m also running the X11 version of Plasma, rather than Wayland. I did some testing, and Wayland seems to be particularly flaky with AMD graphics, though remarkably stable on the Intel-graphics-based laptop I tested on. Your mileage may vary.

KDE System Info

Window Dressing

At work I use a Mac, so having the close, minimize, and maximize icons on the left just keeps my flow going. I really thought this was going to be one of those “sorry, can’t do with Plasma” things. Oh, how wrong I was! In fact, Plasma lets you customize all the icons on the title bar.

Titlebar icon placement is handled by the “Look and Feel of Windows Titles” settings menu.

Bluetooth

I was able to connect a Bluetooth Microsoft Arc Mouse and my Apple AirPods without any difficulty whatsoever. Major bonus in my book!

Scaling

One of the things I love about Plasma is the decimal-based resolution scaling. Whereas the GTK-based desktops I’ve used require scaling at 1x, 2x, 3x, etc., Plasma allows you to choose, for example, 1.5x. This is a huge improvement for HiDPI displays.

The caveat is, you’re probably not going to be running all Qt apps. Invariably, you’ll also run some GTK apps as well. These will ignore your scaling. This was the case for me with Unity Editor.

Luckily, there’s an easy fix!

Open kmenuedit

Find the app you need to scale

Prefix the command with: GDK_SCALE=2
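For example, if the app’s menu entry had an Exec line like the hypothetical one below, the fix is just a prefix (the app name is a stand-in; using env keeps the line a valid Exec command):

```shell
# Hypothetical Exec line from a .desktop entry; "unity-editor" is a stand-in name.
orig="Exec=unity-editor %U"
# Prefix the command with the scaling variable:
fixed=$(printf '%s\n' "$orig" | sed 's|^Exec=|Exec=env GDK_SCALE=2 |')
echo "$fixed"
```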

Multiple Displays

While this isn’t an issue for me on my main computer, I did want to see how Plasma handled multiple monitors in case I’m able to get a 2nd display at home someday. Using a test laptop with Intel-based graphics I had no problem at all running Plasma with two 4K monitors daisy chained with DisplayPort.

Extra Surprises

PSD Previews

One of the things that drives me crazy about Nautilus is that it doesn’t support the preview of PSD files out of the box. Perhaps there’s a plugin or setting somewhere, but not that I could find. I was thrilled to open a folder with a whole slew of PSD files and see previews of them working by default.

Latte Dock

If you’re a lover of docks like I am, I can’t recommend Latte Dock highly enough! Latte Dock is integrated beautifully into the Plasma ecosystem, with all the fun stuff like pinning and app actions. For example, the Spotify app will give you playback options right from the dock icon.

Issues

Media over SMB

Dolphin (KDE’s file manager) does a fantastic job of browsing SMB shares. It’s handy to be able to view shares without actually mounting them, but there are quite a few drawbacks. Most frustrating was getting media to play when double-clicking a file. I was eventually able to get it to work with VLC by using the snap version and dragging/dropping the file into the VLC window, but this still required me to put in my username and password for each video. My recommendation: mount the share, and everything works fine. I found smb4k recommended in a forum for this, and it does a fantastic job. Just make sure to exclude its default mount point, ~/smb4k, from your backup jobs.

Discover

Discover is Plasma’s app installation and system update tool. It’s gotten much better over the years (and even since I began writing this article), but can still be finicky.
For example: if I search for ‘kmenu,’ I get nothing. It’s not until I search for ‘kmenuedit’ that I get a search result. It just seems that by now I should be able to do a partial search and get good results.

Wherefore art thou kmenuedit?
Oh, there you are.

I will say this about Discover, though: its ability to handle both apt and snap versions of packages is very convenient!

KRDC

KRDC is a Qt-based remote desktop app for Plasma. It works great on regular-resolution displays, but has some strange scaling issues for me on a HiDPI display with AMD graphics. I like to have a bunch of remote desktop sessions open at once, so I’ll typically have the remote desktop display be the current size of the client window. This works great in Remmina (the GTK equivalent of KRDC), but with KRDC I can never get it working quite right. (See below)

My old Windows VM and Plex server, before I migrated it to Ubuntu

Verdict

I’m sold! I started this article about four months ago wondering when I’d switch from Plasma back to Budgie. Now, I can say without a doubt that Plasma will remain my desktop of choice for the foreseeable future. Great job Plasma team!

ZFS on Ubuntu server

Ubuntu Home Server Setup Part II

Welcome to Part II of my Ubuntu Home Server build! In Part I, I did a very basic Ubuntu Server install. In this part, I’ll be creating a ZFS pool and volumes to store all my data on.

Other parts of this guide can be found at:

Home Server With Ubuntu

Setup

I’ll be setting up a server with 8 physical drives.

Disk 0: SSD for OS

Disk 1: SSD for ZFS Intent Log (improves write performance)
(read fantastic information about it here: http://nex7.blogspot.com/2013/04/zfs-intent-log.html)

Disk 2: SSD for L2ARC caching (improves read performance)

Disk 3 – 7: HDDs for ZFS Pool (where all my data will be stored)

Quick disclosure: I’m *far* from a ZFS expert. From what I’ve gleaned, this should suffice for home / small business use. If you’re planning something enterprise-grade, find an expert!

Install Ubuntu

Perform a regular Ubuntu server installation, or use an existing server.

SSH into the server, rather than using the console. You’ll want to be able to copy and paste when you set up the zpool.

Install ZFS

sudo apt install zfsutils-linux

Create the ZPOOL

I’ll be using RAIDZ (which is like RAID-5) to get redundancy on my disks without losing too much usable space.

ZFS offers many other options, like RAID0, 1, 6, etc. Use whichever is appropriate for your workload.
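As a sanity check on what RAIDZ costs you, here’s the back-of-envelope math for a pool like mine (5 drives, assuming 4 TB each for illustration; RAIDZ1 spends one drive’s worth of space on parity):

```shell
# RAIDZ1 usable capacity: (number of drives - 1) * drive size.
disks=5        # drives in the vdev
size_tb=4      # assumed capacity of each drive, in TB
usable=$(( (disks - 1) * size_tb ))
echo "${usable} TB usable of $(( disks * size_tb )) TB raw"
```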

It is very strongly recommended to not use disk names like sdb, sdc, etc. Those might change across reboots.

Many of the articles I’ve read suggest using UUIDs. However, my experience on Ubuntu Server is that these are not assigned to blank disks. Therefore, I will be using disk paths instead.

These are verbose and a bit of a pain to type, but they make sure you know exactly what disk you are referring to should you need to swap drives in the future. They will also not change on reboots.

To see your installed disks run:

ls -lh /dev/disk/by-path

My output looks like

adam@normandy:~$ ls -lh /dev/disk/by-path
 total 0
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:00:1f.2-ata-5 -> ../../sr0
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:0:0 -> ../../sda
 lrwxrwxrwx 1 root root 10 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:0:0-part1 -> ../../sda1
 lrwxrwxrwx 1 root root 10 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:0:0-part2 -> ../../sda2
 lrwxrwxrwx 1 root root 10 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:0:0-part3 -> ../../sda3
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:1:0 -> ../../sdb
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:2:0 -> ../../sdc
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:3:0 -> ../../sdd
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:4:0 -> ../../sde
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:5:0 -> ../../sdf
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:6:0 -> ../../sdg
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:7:0 -> ../../sdh

I chose to install Linux on my 1st drive (sda). I’ll be using sdb for the ZIL, sdc for L2ARC, and sdd, sde, sdf, sdg, and sdh for the data pool.

First, I’ll set up the data pool. This is where SSH is handy, since you can copy/paste your paths from above.

In my example below, I’m naming my pool “data.” You can use a different name if you’d like. If your setup is like mine, you’ll create one pool with many volumes in it.

I’m using drives with 4k physical sectors, so I’m adding the option: -o ashift=12
This should increase performance, but at the cost of total storage space. You can remove this option if you don’t think it’s a good fit for you.
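For reference, ashift is the base-2 logarithm of the sector size ZFS will assume, so ashift=12 corresponds to 4k sectors:

```shell
# ashift=12 means 2^12-byte sectors:
ashift=12
sector=$(( 1 << ashift ))
echo "$sector bytes"
```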

sudo zpool create data -o ashift=12 raidz /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:3:0 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:4:0 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:5:0 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:6:0 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:7:0

To confirm this worked, run:

zpool list

You should have something like:

adam@normandy:~$ zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data  18.1T   238K  18.1T         -     0%     0%  1.00x  ONLINE  -

Next I’ll tell ZFS to use sdb as the ZFS Intent Log

sudo zpool add data log /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:1:0

Then I’ll tell ZFS to use sdc as the L2ARC cache

sudo zpool add data cache /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:2:0

If I run zpool status, I should see my data, ZIL, and cache drives

adam@normandy:/data/download/secure$ zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

        NAME                               STATE     READ WRITE CKSUM
        data                               ONLINE       0     0     0
          raidz1-0                         ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:3:0  ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:4:0  ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:5:0  ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:6:0  ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:7:0  ONLINE       0     0     0
        logs
          pci-0000:02:00.0-scsi-0:0:1:0    ONLINE       0     0     0
        cache
          pci-0000:02:00.0-scsi-0:0:2:0    ONLINE       0     0     0

errors: No known data errors

Create the Filesystem

Now that the zpool exists, we can create filesystems on top of it.
A pool can have multiple filesystems. I’ll create one for media, and one for virtual machines (because that’s what I need).

sudo zfs create data/media
sudo zfs create data/vm

To confirm they were created correctly, run:

zfs list

And it should look something like this:

adam@normandy:~$ zfs list
 NAME         USED  AVAIL  REFER  MOUNTPOINT
 data         210K  14.0T  36.7K  /data
 data/media  35.1K  14.0T  35.1K  /data/media
 data/vm     35.1K  14.0T  35.1K  /data/vm

I would also suggest the following tweaks. Combined, they increased my zfs throughput 50-100%! They were recommended by https://unicolet.blogspot.com/2013/03/a-not-so-short-guide-to-zfs-on-linux.html and https://www.servethehome.com/the-case-for-using-zfs-compression/ as I searched for solutions to my less-than-stellar zfs performance.

sudo zfs set xattr=sa data/media
sudo zfs set atime=off data/media
sudo zfs set compression=lz4 data/media
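Since the same tweaks apply to every dataset, a small loop saves some typing. Sketched here as a dry run that just prints the commands; drop the echo (and add sudo) to actually apply them:

```shell
# Dry run: print the tweak commands for each dataset in my setup.
out=$(
  for fs in data/media data/vm; do
    for opt in xattr=sa atime=off compression=lz4; do
      echo "zfs set $opt $fs"
    done
  done
)
echo "$out"
```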

All of your zfs filesystems are automatically mounted.

adam@normandy:~$ mount
...
data on /data type zfs (rw,xattr,noacl)
data/media on /data/media type zfs (rw,xattr,noacl)
data/vm on /data/vm type zfs (rw,xattr,noacl)

You can use them just as you would any mounted filesystem. That’s it!

Home Server With Ubuntu

I finally picked up a used Dell PowerEdge R720 from the fine folks at ServerMoney to replace my current home server (a Frankenstein of workstation parts).

I thought I’d document my setup for anyone that might be interested, and for my future self, who will wonder what exactly I did in the 1st place 😜

My server needs are quite diverse, so I’ll break this guide into separate posts for each one to keep things organized. (links will be active once each part is finished)

Happy serving!

Part I: Basic Ubuntu Server Install (SSH, KDE, & xrdp)
Part II: Ubuntu ZFS Setup
Part III: Ubuntu Virtualization Server with KVM
Part IV: pfSense on KVM
Part V: Plex on Ubuntu
Part VI: SMB & NFS

Upgrading to PostgreSQL 11 on Centos 7

Since my previous article Upgrading to PostgreSQL 10 on Centos 7 was so popular, I thought I’d do a follow-up for anyone looking to upgrade a very simply configured PostgreSQL 10 server to PostgreSQL 11 on Centos 7.

First, and this goes without saying, backup your server!

Install the PostgreSQL repo RPM
sudo yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
Install PostgreSQL 11
sudo yum install postgresql11-server
Extensions

If you’re using extensions like pgcrypto, you will also need the postgresql11-contrib package

sudo yum install postgresql11-contrib
Stop Postgresql 10 and Postgresql 11
sudo systemctl stop postgresql-10.service && sudo systemctl stop postgresql-11.service
Initialize the PostgreSQL 11 database
sudo su postgres
cd ~/
/usr/pgsql-11/bin/initdb -D /var/lib/pgsql/11/data/
Migrate your database from the 10.x version to 11.x
/usr/pgsql-11/bin/pg_upgrade --old-datadir /var/lib/pgsql/10/data/ --new-datadir /var/lib/pgsql/11/data/ --old-bindir /usr/pgsql-10/bin/ --new-bindir /usr/pgsql-11/bin/
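pg_upgrade also has a --check mode that validates both clusters without modifying anything, which is worth running before the real migration. Shown here echoed as a dry run; remove the echo and run it as the postgres user when ready:

```shell
# --check makes pg_upgrade only report problems, without changing any data.
cmd="/usr/pgsql-11/bin/pg_upgrade --check \
  --old-datadir /var/lib/pgsql/10/data/ --new-datadir /var/lib/pgsql/11/data/ \
  --old-bindir /usr/pgsql-10/bin/ --new-bindir /usr/pgsql-11/bin/"
echo "$cmd"
```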
Edit configuration files

Make any necessary changes to postgresql.conf. I’d recommend making the changes to the new version rather than copying over postgresql.conf from 10.

You can view your 10 configuration with:

nano /var/lib/pgsql/10/data/postgresql.conf

You can make your changes to the 11 configuration with:

nano /var/lib/pgsql/11/data/postgresql.conf

If you need to connect from other servers, make sure to change:

#listen_addresses = 'localhost'

to (the apostrophes may not survive copy/paste; you may want to enter them by hand)

listen_addresses = '*'
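The edit can also be scripted. Here’s a sketch of the sed substitution run against a temporary file, so you can see the result before pointing it (with sudo) at the real /var/lib/pgsql/11/data/postgresql.conf:

```shell
# Demonstrate the substitution on a scratch copy of the relevant line.
conf=$(mktemp)
printf "#listen_addresses = 'localhost'\n" > "$conf"
# Uncomment the line and listen on all interfaces:
sed -i "s/^#listen_addresses = 'localhost'/listen_addresses = '*'/" "$conf"
cat "$conf"
```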

Now do the same with pg_hba.conf

View the old configuration

nano /var/lib/pgsql/10/data/pg_hba.conf

Edit the new configuration

nano /var/lib/pgsql/11/data/pg_hba.conf
Start the server
systemctl start postgresql-11.service
Analyze and optimize the new cluster
./analyze_new_cluster.sh
Enable the PostgreSQL 11 Service (to start automatically)
systemctl enable postgresql-11
Remove PostgreSQL 10 and its data (if so desired)
./delete_old_cluster.sh
exit
sudo yum remove postgresql10-server

That should do it!

Audio Fix for Mass Effect on Steam for Linux

I’m a few days late for N7 day, but I figure this information is useful nonetheless!

I’ve always done my Mass Effecting on XBox 360 or XBox One. But now that Steam has many Windows games working on Linux, I figured: what the heck, let’s start over again there! (I’m not the only one who plays Mass Effect on loop right? Bueller? Bueller?)

Everything worked like magic right from the get-go, except audio. You’ll probably get sound from the corporate logos, but nothing when you play the game. If you try to turn off hardware audio in the settings, which is the culprit, Mass Effect will dutifully turn it back on again.

So, by way of the Arch Linux forums, here’s the fix:

Open up a terminal window.

Make a copy of the existing config, just in case:

cp ~/.steam/steam/steamapps/common/Mass\ Effect/Engine/Config/BaseEngine.ini ~/Desktop/

Edit the configuration file:

gedit ~/.steam/steam/steamapps/common/Mass\ Effect/Engine/Config/BaseEngine.ini

Scroll down to the section with the heading: [ISACTAudio.ISACTAudioDevice]

Copy and paste these two lines right below the heading:

DeviceName=Generic Software
UseEffectsProcessing=False
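If you’d rather script the edit, a GNU sed one-liner can insert the two lines right after the section heading. Sketched here against a throwaway file (the placeholder entry is made up); point it at BaseEngine.ini once you’re comfortable:

```shell
# Build a throwaway ini with the heading and a placeholder line ("SomeExisting=1" is made up).
ini=$(mktemp)
printf '[ISACTAudio.ISACTAudioDevice]\nSomeExisting=1\n' > "$ini"

# Append the two audio fixes immediately after the section heading (GNU sed).
sed -i '/^\[ISACTAudio.ISACTAudioDevice\]/a DeviceName=Generic Software\nUseEffectsProcessing=False' "$ini"
cat "$ini"
```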

And that’s it! Fire up Mass Effect, and you should get audio.

Getting to the (.Net) Core of It

Migrating a .Net 4.x Console Application to .Net Core

I finally got the server side of Winds of Paradise running in .Net Core! I thought I’d share how I did it, in hopes that it might help you do the same. As cool as Mono is, I’m totally psyched to have all my C# code running on Microsoft’s .Net under Centos Linux!

If you haven’t watched it yet, I highly recommend Microsoft’s .Net Core lesson at the Microsoft Virtual Academy:
https://mva.microsoft.com/en-US/training-courses/introduction-to-net-core-16764?l=DoVafl7yC_7606218965

The 1st thing I did was update Visual Studio 2015 Community to Update 3 and install all of the prerequisites. These can be found at: http://getdotnet.azurewebsites.net/target-dotnet-platforms.html

You can also find instructions there on how to install .Net Core onto wherever your code will be hosted.

Once everything is installed (block out a good hour for this), I opened my existing solution in Visual Studio.

To the solution, I then added another project and chose the type: Console Application (.Net Core)


Next, open that project and use the NuGet Manager to install any packages that you are using in your .Net 4.x project. For me, this was npgsql and Newtonsoft.JSON.

Once the new project is created, copy over all of your .cs files from your original project to the new .Net Core version.

Hit build, and start working on replacing any .Net 4.x functionality that is not available in .Net Core.

What I did was make corrections in the .Net Core version and then replicate those changes in the .Net 4.5 version. This way, I could build and run the old version with minor changes to prove the changes worked, rather than changing *everything* and trying to debug the .Net Core version. This worked, since .Net 4.x encompasses everything .Net Core does.

Once you’ve worked out all of the bugs, copy all of your code over to whatever box you’re running .Net Core on (Windows, Linux, MacOS, etc.).
Open a shell, cd into the folder with project.json, and run:

dotnet restore
dotnet run

That’s it! .Net will download necessary packages from nuget, compile, and run.

Here are some of the classes I had to find workarounds for:

System.Net.HttpWebRequest, HttpWebRequest.GetRequestStream:
The .Net Core version of this class only has async methods for GetRequestStream and GetResponse. You’ll have to move to GetRequestStreamAsync and GetResponseAsync, which also means having your methods return Task instead of void.
Also, the .Net Core version does not have HttpWebRequest.ContentLength when doing a POST. So far, simply removing it seems to work fine in both .Net 4.5 and .Net Core.

NpgsqlDataAdapter:
I’m guessing this could probably be made to work with .Net Core fairly easily, but moving NpgsqlDataAdapter usage to NpgsqlDataReader for better database efficiency has been on my TODO list for quite a while anyway.

System.Configuration:
I found a great resource for setting up json configuration files at:
https://csharp.christiannagel.com/2016/08/02/netcoreconfiguration/
A couple notes:
1) I had previously been targeting .Net 4.5. I needed to target 4.5.1 in order to install Microsoft.Extensions.Configuration
2) If you create appsettings.json in your .Net 4.5 project 1st, make sure to create appsettings.json fresh in your new .Net Core project; DON’T COPY it from the .Net 4.5.1 one. Creating it new sets it as a “Content File”

System.Timers.Timer:
This was a tricky one. System.Timers.Timer makes it easy to add and remove event handlers on the timer with Elapsed += [function]. This allowed me to have a couple of static timers that any other object could add events to.
The only alternative in .Net Core is System.Threading.Timer. This timer is much less sophisticated: it can only accept one callback, set at construction, and that callback cannot be changed. My workaround was to implement a separate timer for each object that needed one. I’m hoping this does not increase resource consumption. Hopefully a better timer alternative will work its way into NuGet; I didn’t see anything that looked promising at the moment.

Happy coding!

WordPress Auto Update Soup-to-Nuts

This took a couple days of Binging and hacking, but I finally got WordPress to auto-update on Centos 7 with SSL and without disabling SELinux.

Update 1: I should note, this is for self-hosted WordPress users.

(Anything in brackets [] is up to you to choose)

WordPress 4.4 requires FTP access to the server in order to update itself.

vsFTPd with SSL

To keep things secure, I’ve set up vsftpd with chroots (to prevent FTP accounts from going outside of where they should be) and SSL.

Install vsftpd

sudo yum install vsftpd

Edit the configuration file

sudo nano /etc/vsftpd/vsftpd.conf

The following options should already be in your config file and can just be changed:

anonymous_enable=NO
local_enable=YES
write_enable=YES
chroot_local_user=YES

The rest should be added to the bottom of the config file.
I’m assuming you already have an SSL cert you are using for your website. You can use this cert for vsftpd as well.

# Allow writes inside the chroot jail
allow_writeable_chroot=YES

#SSL
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
rsa_cert_file=/etc/pki/tls/certs/[your ssl cert].crt
rsa_private_key_file=/etc/pki/tls/private/[your ssl cert key].key

Now you can enable and start the FTP server

sudo systemctl enable vsftpd
sudo systemctl start vsftpd

Next, create a user that will be used for FTP.
It’s important to set the home directory with the “-d” option to where your website files are. I’m assuming the default /var/www/html.

sudo adduser -d /var/www/html [ftp-user]

Set a password for the user. Make sure to choose something secure!

sudo passwd [ftp-user]

Add the user to the apache group, so that it will have write access to /var/www/html/*

sudo gpasswd -a [ftp-user] apache

Make sure that apache has read/write to the WordPress files

sudo chown apache:apache /var/www/html/*
sudo chmod -R g+w /var/www/html/*

SELinux

To the best of my knowledge, these are the SELinux commands necessary for both vsftpd and Apache, so that WordPress can FTP into the server and update itself.

SELinux booleans to enable the functionality we need

setsebool -P ftp_home_dir=on
setsebool -P ftpd_full_access=on
setsebool -P httpd_can_network_connect=on
setsebool -P httpd_can_connect_ftp=on

SELinux needs to be told that Apache has permission to write the files in /var/www/html and its subfolders

sudo chcon -R -v -t httpd_sys_rw_content_t /var/www/html

Let’s test the FTP server to make sure you can connect

First, install the lftp client

sudo yum install lftp

Connect to the FTP server

lftp -d -u [ftp-user] -e 'set ftp:ssl-force true' 127.0.0.1

Run

ls

and make sure you get a directory listing. If not, you’ll need to use the debug data printed to troubleshoot further (I sure did, I hope you won’t).

Assuming that works, the last step is to edit wp-config.php with the FTP server settings

sudo nano /var/www/html/wp-config.php

Under the database settings, add a section:

/*** FTP login settings ***/
define("FTP_HOST", "127.0.0.1");
define("FTP_USER", "[ftp-user]");
define("FTP_PASS", "[ftp-user-password]");

It may not be necessary, but I like to restart Apache just to be sure

sudo systemctl restart httpd

Finally, log into WordPress and try to update something simple, like a theme or plugin. It should work!

Some Fun With NFS and Windows

I have some Linux servers that I’d like to talk to my Windows Server 2012R2 file server.

Since I’d like daemons, rather than users, to be able to communicate with the server, I thought this would be a good candidate for NFS.

Linux Side (1st round)

(I’m using Centos, but the general concept will apply to Fedora, Ubuntu, etc.)

Install the daemons that will access the file server. Most of these will create their own users.

Create any additional users you would like to be able to access the file server. You can always add more later.

To save some complexity (and not assume you pay for Active Directory), I’m not going to have my file server look up Linux IDs via Active Directory. Instead, I’m going to use flat passwd and group files, just like Linux.

Copy (via SSH, USB, copy/paste, whatever) the passwd and group files from /etc/ over to your Windows server.

You can delete all of the entries for users/groups that will not be accessing the share.

Windows Side

Copy the passwd and group files to:
%SystemRoot%\system32\drivers\etc\

Create users (and groups) on your server with the same user name / group name as you created on your Linux server.

UPDATE: Make sure you set the Windows users to never have their passwords expire if they are service accounts. If they do, the users will lose access to the shares via NFS when the password expires.

The passwd and group files serve as a map between the user/group IDs in Linux and the user/group names in Windows.

Install Server for NFS on the Windows server.

Server Manager->Manage->Add Roles and Features

Next->Next->etc. until installed.

Browse to the folder on your file server you are looking to share.

Right click on it and choose Properties

Go to the NFS Sharing tab

Click the “Manage NFS Sharing” button


Check the “Share this folder” check box.

The only other change I make here is to uncheck the “Enable unmapped user access” option so that only users in the passwd file we copied over will have access to the server.

Next, click on the Permissions button at the bottom


I like to set “All Machines” to be no access, that way only the servers I specify will be able to mount the share.

Click the “Add…” button.

In the “Add Names:” box, enter the IP address of your Linux server.

Make sure Type of Access is set to the type you are looking for.

I prefer to leave “Allow root access” unchecked for a bit more security.

Press OK, OK, Close

If everything worked, the folder icon should now look like this:

nfs-share-icon

Using the security tab, assign NTFS permission to the folder for the users you would like to be able to read/write to that folder, just as you would if it were an SMB share.

UPDATE for TVHeadEnd:
Many Linux daemons will use the same id for both the user and group.
Some, like tvheadend, will use different group and user IDs.
For these, it’s critical to set up a group with the same name (and with the user as a member) in Windows and assign permissions to the group as well as the user.
Otherwise, you will get permission denied errors.

Linux Side (2nd Round)

Install the NFS client, then enable (make start on boot) and start the services.

sudo yum -y install nfs-utils

sudo systemctl enable rpcbind
sudo systemctl enable nfs-server
sudo systemctl enable nfs-lock
sudo systemctl enable nfs-idmap

sudo systemctl start rpcbind
sudo systemctl start nfs-server
sudo systemctl start nfs-lock
sudo systemctl start nfs-idmap

Create a folder that will be used as the mount point for the file server, aka: Where do I go to get to the files on the file server.

I was really hoping to find a definitive “this is where to mount nfs shares” article, but some Binging around came up with nothing.

I will therefore advise you create a folder under /mnt, as that feels right to me.

sudo mkdir -p /mnt/[server name]/[share name]

It’s finally time to give the share a test.

Run:

sudo mount -t nfs [server name or ip]:/[nfs share name] /mnt/[server name]/[share name]

If you receive an access denied error, you may need to specify NFS v3

sudo mount -t nfs -o nfsvers=3 [server name or ip]:/[nfs share name] /mnt/[server name]/[share name]

Make sure you are logged in as a user with permission to that folder and cd into it:

cd /mnt/[server name]/[share name]

You should now be able to create files and folders! (which will of course be visible on the file server as well)

The final step is to have the server automatically mount the share on boot.

sudo nano /etc/fstab

Add a line similar to:

[server dns name or ip]:/[share name]    /mnt/[file server name]/[share]  nfs     defaults        0 0

If you needed the nfsvers=3 option earlier, instead use:

[server dns name or ip]:/[share name]    /mnt/[file server name]/[share]  nfs     nfsvers=3        0 0

Give the server a reboot to test automatic mounting

sudo shutdown -r now

When you reboot, the share should be mounted and all is good in the world!

PS: If you are using this for transmission-daemon (which I’m assuming you’re using for legitimate purposes), make sure you edit your settings.json file and set umask=0, otherwise transmission will create folders that it cannot create files in.

NuGet Is Just Better

I was working on getting PostgreSQL, Visual Studio Remote Debugger, and PHP running on Server 2012 R2 so that I can up my debugging-fu, rather than just relying on Console.WriteLine.

Ran into some DLL hell trying to get npgsql working. I saw NuGet mentioned while Binging for solutions, so I figured I’d give it a try. Where has this been all my coding life?

A few clicks, and remote debugging is up and running. I even copied the compiled files over to my production box, and everything is working fine in Centos on Mono as well. What a great way to spend a Friday morning off!