BOINC


BOINC (the Berkeley Open Infrastructure for Network Computing) is a tool that lets you volunteer some of your CPU time to community science projects. It is maintained by UC Berkeley and was originally developed for the SETI@Home project (RIP), but there are about 30 different projects you can contribute to. I currently split my time across several of them.

Below is how I've deployed BOINC into my Kubernetes cluster.

Product: BOINC
Install Type: Manifest
Container Image: linuxserver/boinc

Installation Details

I have not yet created a Helm chart for this, so this deployment uses plain manifest files.

00-utility-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: utility
  labels:
    name: utility

This is optional, but I create the namespace in my builds in case I ever have to rebuild from scratch. If it already exists, applying it again has no real negative effect.
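If you want to double-check, the namespace can be verified (or idempotently re-applied) with standard kubectl commands; nothing here is specific to my setup:

kubectl get namespace utility
kubectl apply -f 00-utility-namespace.yaml   # safe to re-run; apply is declarative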

01-boinc-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: boinc-configs
  namespace: utility
data:
  BOINC_GUI_RPC_PASSWORD: boinc
  BOINC_CMD_LINE_OPTIONS: "--allow_remote_gui_rpc"
  TZ: "America/New_York"
  PUID: "1000"
  PGID: "1000"
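One note: BOINC_GUI_RPC_PASSWORD is sitting in a plain ConfigMap here. If you would rather keep it out of plain view, it could live in a Secret instead; a minimal sketch, using a hypothetical boinc-secrets name:

apiVersion: v1
kind: Secret
metadata:
  name: boinc-secrets
  namespace: utility
type: Opaque
stringData:
  BOINC_GUI_RPC_PASSWORD: boinc

The Deployment's envFrom section could then reference it with a secretRef entry alongside the existing configMapRef.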

02-boinc-storage.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: utility
  name: boinc-config
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn
  resources:
    requests:
      storage: 20Gi

I am using Longhorn as the distributed block storage storageClass in my cluster. You may need to adjust this for your particular setup, or consider using emptyDir if you do not need data persistence; just be aware that any work units the BOINC client is currently crunching will be lost if the container restarts.
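For the no-persistence route, a minimal sketch of what the Deployment's volume stanza would look like with emptyDir instead of the PVC (the 02 manifest can then be dropped entirely):

      volumes:
      - name: boinc-config
        emptyDir: {}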

03-boinc-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: boinc
  namespace: utility
  labels:
    app: boinc
    app.kubernetes.io/name: boinc
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: boinc
  template:
    metadata:
      labels:
        app: boinc
        app.kubernetes.io/name: boinc
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-type
                operator: In
                values:
                  - physical
      volumes:
      - name: boinc-config
        persistentVolumeClaim:
          claimName: boinc-config
      containers:
        - name: boinc
          image: linuxserver/boinc:latest
          imagePullPolicy: Always
          volumeMounts:
          - name: boinc-config
            mountPath: /config
          resources:
            limits:
              cpu: '4'
              memory: 8Gi
            requests:
              cpu: '1'
              memory: 1Gi
          envFrom:
            - configMapRef:
                name: boinc-configs

Notes

I do two things differently for the BOINC install, since it can be very resource intensive. First, because I have a mix of physical and virtual machines in my cluster, I have added a node-type label to each node to indicate what kind of server it is (physical or virtual). My physical servers have fewer cores, but each core is more powerful, so I want BOINC to run on ONLY those servers. Hence the lines below:

     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: node-type
               operator: In
               values:
                 - physical

You can omit these lines if you do not wish to have a similar configuration.
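For reference, the node-type label itself can be applied with plain kubectl; the node name below is just a placeholder for one of your own nodes:

kubectl label node <your-node-name> node-type=physical
kubectl get nodes --show-labels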

Second, I explicitly define resources for the deployment. Generally I do not set requests or limits, preferring to let the cluster schedule workloads freely. However, BOINC can be very resource hungry and I do not want it taking too much, so I define minimum and maximum resources with these lines:

         resources:
           limits:
             cpu: '4'
             memory: 8Gi
           requests:
             cpu: '1'
             memory: 1Gi

You can of course omit these lines to let BOINC take as much as it can, but think about how much you are comfortable allocating and adjust the values for your environment.
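If metrics-server is installed in your cluster, you can check what the pod is actually using against those limits with the usual commands:

kubectl top pod -n utility
kubectl describe pod -n utility -l app=boinc | grep -A 4 Limits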

04-boinc-service.yaml

kind: Service
apiVersion: v1
metadata:
  name: boinc-service
  namespace: utility
spec:
  selector:
    app: boinc
  ports:
  - protocol: TCP
    name: boinc
    port: 31416
    targetPort: 31416
  - protocol: TCP
    name: web1
    port: 8181
    targetPort: 8181
  - protocol: TCP
    name: web2
    port: 8080
    targetPort: 8080
  type: LoadBalancer

This image does provide a web front end with KASM, but I also expose an external (to the cluster) IP with a LoadBalancer (via MetalLB) so that any BOINC Manager on the internal network can connect to this client remotely. If you don't need or want that functionality, you can change the type to ClusterIP and define an Ingress to reach the web front end.
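If you do go the ClusterIP route, an Ingress for the web front end might look something like the sketch below; the hostname and ingress class are placeholders for whatever your cluster uses, and it points at the HTTP web port (8080) the Service already exposes:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: boinc-ingress
  namespace: utility
spec:
  ingressClassName: nginx            # placeholder: substitute your ingress class
  rules:
  - host: boinc.example.lan          # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: boinc-service
            port:
              number: 8080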

Now, we can deploy this all together with:

kubectl apply -f 00-utility-namespace.yaml \
              -f 01-boinc-config.yaml \
              -f 02-boinc-storage.yaml \
              -f 03-boinc-deploy.yaml \
              -f 04-boinc-service.yaml
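
Once applied, a few quick status checks confirm everything came up; these are just standard kubectl queries:

kubectl get pods -n utility -l app=boinc
kubectl get svc -n utility boinc-service
kubectl logs -n utility deploy/boinc --tail=20

When the Service shows an EXTERNAL-IP from MetalLB, the web front end should be reachable at http://<external-ip>:8080 (or 8181), and a BOINC Manager on your network can attach to <external-ip>:31416 using the GUI RPC password from the ConfigMap.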