I thought that the best option to run Frigate was to run it bare metal and skip virtualization and system containers. However, the situation has changed a little, as I was able to fire up Frigate in an LXC container on Proxmox with a little help from AMD ROCm hardware-assisted video decoding. And yes, detection crashes on ONNX and needs to run on the CPU instead… but video decoding works well. Even better, detection on a 16 x AMD Ryzen 7 255 w/ Radeon 780M Graphics (1 Socket) host works very well for almost 20 video streams (mixed H264 and H265). You can switch…
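For the setup described above, a minimal sketch of the relevant Frigate config bits, assuming the VAAPI preset handles decoding on the Radeon 780M and detection is pinned to the CPU; the detector name and thread count are placeholders:

```yaml
# Hardware video decoding on the AMD iGPU via VAAPI (render node passed into the LXC),
# with detection forced onto the CPU because ONNX detection crashed on ROCm here.
ffmpeg:
  hwaccel_args: preset-vaapi

detectors:
  cpu1:
    type: cpu
    num_threads: 4   # placeholder; tune to your core count
```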
We are using a Huawei PV installation on top of our house roof. It has a web panel and a mobile application available to preview every detail about its settings and working conditions. However, I would like to integrate PV power production into my Fibaro HC3. So, first things first: create a Northbound API user in the web panel and select all privileges for data acquisition. Then grab the auth token: in the response you will get an xsrf-token in the headers. Now, using this auth token, you need to get the list of installations (stations): Now, using your station ID, get the list of your devices: To retrieve sort of real time…
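To make the first two steps concrete, a sketch with curl, assuming the European FusionSolar endpoint; the host, the /thirdData paths, and the field names follow Huawei's Northbound API docs, so verify them for your region and API version:

```bash
# Log in with the Northbound API user; the xsrf-token arrives as a response header.
curl -si https://eu5.fusionsolar.huawei.com/thirdData/login \
  -H 'Content-Type: application/json' \
  -d '{"userName": "my-nb-user", "systemCode": "my-nb-password"}' \
  | grep -i '^xsrf-token'

# Reuse the token (here in $TOKEN) to fetch the list of stations.
curl -s https://eu5.fusionsolar.huawei.com/thirdData/getStationList \
  -H 'Content-Type: application/json' \
  -H "XSRF-TOKEN: $TOKEN" \
  -d '{}'
```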
Quick overview of LLM MLX LORA training parameters.

weight_decay
A regularization technique that adds a small penalty to the weights during training to prevent them from growing too large, helping to reduce overfitting. Often implemented as L2 regularization.
Examples: 0.00001 – 0.01

grad_clip
Short for gradient clipping: a method that limits (clips) the size of gradients during backpropagation to prevent exploding gradients and stabilize training.
Examples: 0.1 – 1.0

rank
Refers to the dimensionality or the number of independent directions in a matrix or tensor. In low-rank models, it controls how much the model compresses or approximates the original data.
Examples: 4, …
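For orientation, here is where these knobs typically live in an mlx_lm.lora training config; key names vary between mlx-lm versions, so treat this YAML fragment as a hedged sketch rather than a canonical schema:

```yaml
# Hypothetical mlx_lm.lora config fragment; verify key names against your mlx-lm version.
model: "mlx-community/Qwen2.5-3B-Instruct-bf16"
train: true
data: "data/"            # directory containing train.jsonl / valid.jsonl

learning_rate: 1e-5
weight_decay: 0.01       # small L2-style penalty on the weights
grad_clip: 1.0           # clip gradient norm to stabilize training

lora_parameters:
  rank: 8                # dimensionality of the low-rank adapters
  dropout: 0.0
  scale: 20.0
```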
Full fine-tuning of mlx-community/Qwen2.5-3B-Instruct-bf16. Recently I posted an article on how to train a LORA MLX LLM here. Then I asked myself how I can export or convert such an MLX model into HF or GGUF format. Even though MLX has an option to export MLX into GGUF, most of the time it is not supported by the models I have been using. From what I recall, even if it does support Qwen, it is not version 3 but version 2, and quality suffers from such a conversion. I do not know exactly why it works like that. So I decided to give it a try with…
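For reference, the export path I am referring to is the fuse step in mlx-lm; a minimal sketch, assuming adapters trained as in the earlier post (paths are placeholders, and GGUF export only works for a subset of architectures):

```bash
# Fuse the LORA adapters back into the base model and attempt a GGUF export.
mlx_lm.fuse \
  --model mlx-community/Qwen2.5-3B-Instruct-bf16 \
  --adapter-path adapters/ \
  --save-path fused-model/ \
  --export-gguf    # only supported for some architectures
```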
I have done over 500 training sessions using Qwen2.5, Qwen3, Gemma and plenty of other publicly available LLMs to inject domain-specific knowledge into the model’s low-rank adapters (LORA). However, instead of giving you tons of unimportant facts, I will just stick to the most important things. Starting with the fact that I have used MLX on my Mac Studio M2 Ultra as well as on a MacBook Pro M1 Pro. Both fit this task well in terms of BF16 speed as well as unified memory capacity and speed (up to 800GB/s). Memory speed is the most important factor comparing…
During a sync between two Proxmox Backup Server instances I got a “decryption failed or bad record mac” error message. So I decided to upgrade the source PBS to match its version with the target PBS.

PBS upgrade

To upgrade PBS: get rid of the bookworm sources. And then: However, it did not help.

Further debugging

Three things were involved in this investigation. This one: Next, disabling the Suricata IDS/IPS. Did not help. Finally, I changed the pfSense setting System – Advanced – Firewall & NAT (Firewall Optimization Options) from Aggressive to Conservative: It worked.
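The upgrade step above boils down to a standard Debian major-version bump; a sketch, assuming a move from the bookworm-based PBS release to the trixie-based one (check the official Proxmox upgrade guide first, and note that newer releases may use deb822 .sources files instead):

```bash
# Repoint APT from bookworm to trixie (paths and codename are assumptions).
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update
apt dist-upgrade
```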
Add an “alias” (an additional IP address) to /etc/network/interfaces and then restart the network interfaces.
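A sketch of such an alias stanza, assuming classic ifupdown-style config on Proxmox; the interface name and address are placeholders:

```
# /etc/network/interfaces: add a second IPv4 address ("alias") on vmbr0
auto vmbr0:0
iface vmbr0:0 inet static
    address 192.168.1.2/24
```

With ifupdown2 (the Proxmox default), `ifreload -a` applies the change in place; otherwise `systemctl restart networking` does the job.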
YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities. For more details, please refer to our report on Arxiv. https://yolox.readthedocs.io/en/latest/demo/onnx_readme.html https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/ONNXRuntime To configure this in Frigate:
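A sketch of the corresponding Frigate config, assuming a yolox_tiny ONNX export at 416x416; the model path and dimensions are placeholders, so match them to your actual files and Frigate version:

```yaml
# ONNX detector running a YOLOX model (paths and sizes are assumptions).
detectors:
  onnx:
    type: onnx

model:
  model_type: yolox
  path: /config/model_cache/yolox_tiny.onnx
  width: 416
  height: 416
  input_tensor: nchw
  input_pixel_format: bgr
```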
You can use either the nvidia-smi --query-gpu option: Example output: Or nvidia-smi dmon: Example output: With a little explanation:
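Both invocations as runnable sketches; the queried fields below are a sensible subset, not the full list:

```bash
# Poll selected GPU metrics every second, in CSV form.
nvidia-smi --query-gpu=timestamp,utilization.gpu,utilization.memory,memory.used,memory.total,temperature.gpu \
  --format=csv -l 1

# Or stream device monitoring stats (u = utilization, m = memory, t = PCIe throughput).
nvidia-smi dmon -s umt
```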