Quick Overview of LoRaWAN Technology Protocol

By Arjun Nagarajan. LoRaWAN is a low-power wide area networking protocol developed by the LoRa Alliance that connects “things” wirelessly to the Internet in regional, national, or global networks. The technology meets key Internet of Things (IoT) requirements such as bi-directional communication, end-to-end security, mobility, and localization services. It defines the communication protocol and system …

Differences and Connections Between LPWAN, LoRa, and LoRaWAN

Recently, keywords like LPWAN have generated a surge of interest in the IoT field. Because LPWAN, LoRa, and LoRaWAN have such similar names, many people confuse the three concepts, so today let’s explore the differences and connections between them. LPWAN, also known as LPN, stands for Low Power Wide Area Network or Low Power Network, referring …

Selected Technical Articles on CANopen Bus

Author’s Note: To make it easier for everyone to find technical articles related to the CANopen bus, we have compiled a collection of selected CANopen articles published on the ‘Industrial Communication’ WeChat public account. We welcome everyone to check them out, and if you have any questions or suggestions, please feel free to …

Introduction to Zephyr RTOS for Robotics Development

Robots are not isolated systems; they are sets of distributed sensors and actuators, often running on resource-constrained embedded devices (microcontrollers) that communicate over wired or wireless links. Mainstream robot operating systems such as ROS/ROS 2, however, are too large and heavyweight, making them unsuitable for microcontroller-based robotic scenarios with hard real-time requirements. Zephyr …

Understanding LoRA from a Gradient Perspective

©PaperWeekly Original · Author: Su Jianlin · Affiliation: Zhuiyi Technology · Research Area: NLP, Neural Networks. With the popularity of ChatGPT and its alternatives, various parameter-efficient fine-tuning methods have gained traction; among the most popular, and the focus of this article, is LoRA, which originates from the paper “LoRA: Low-Rank Adaptation of …
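
To make the object of study concrete, here is a minimal sketch of the LoRA reparameterization the article analyzes: a frozen pretrained weight W is augmented by a trainable low-rank product BA scaled by alpha/r. This assumes PyTorch; the class name, shapes, and initialization scale are illustrative, not taken from the article.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA layer: y = x W^T + (alpha/r) * x A^T B^T, with W frozen."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)   # pretrained weight stays frozen
        # B starts at zero so the low-rank update B @ A is zero at step 0,
        # leaving pretrained behavior unchanged when fine-tuning begins.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Only A and B receive gradients, which is why a gradient-level analysis of LoRA reduces to studying the gradients of these two small factors.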

Setting and Explanation of Key LoRa Parameters (Spreading Factor, Coding Rate, Bandwidth)

LoRa Study: Setting and Explanation of Key LoRa Parameters (Spreading Factor, Coding Rate, Bandwidth)
1. Spreading Factor (SF)
2. Coding Rate (CR)
3. Signal Bandwidth (BW)
4. Relationship between LoRa Signal Bandwidth BW, Symbol Rate Rs, and Data Rate DR (see the sketch below)
5. Setting of LoRa Signal Bandwidth, Spreading Factor, and Coding Rate
For specific applications, developers …
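
For item 4, the standard LoRa relationships can be stated compactly: the symbol rate is Rs = BW / 2^SF, and the equivalent bitrate is DR = SF × Rs × CR, where CR is the 4/5 … 4/8 coding rate. The helper below is an illustrative sketch (the function name is mine, not from the article):

```python
def lora_bitrate(sf: int, bw_hz: float, cr_denominator: int) -> float:
    """Equivalent LoRa bitrate in bit/s from the standard relationships:
        Rs = BW / 2^SF              (symbol rate)
        CR = 4 / cr_denominator     (cr_denominator is 5..8, i.e. 4/5 .. 4/8)
        DR = SF * Rs * CR           (bits per second)
    """
    rs = bw_hz / (2 ** sf)     # symbols per second
    cr = 4 / cr_denominator    # e.g. 4/5 coding rate
    return sf * rs * cr

# SF7, 125 kHz bandwidth, 4/5 coding rate -> 5468.75 bit/s
print(lora_bitrate(7, 125_000, 5))
```

The formula makes the trade-off explicit: each +1 in SF halves Rs (roughly halving the bitrate) while extending range, and doubling BW doubles the bitrate at the cost of receiver sensitivity.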

Cost-Effective Fine-Tuning with LoRA

Selected from Sebastian Raschka’s blog. Translated by Machine Heart. Editor: Jiaqi. This article distills lessons from hundreds of experiments by the author, Sebastian Raschka, and is well worth reading. Increasing the amount of data and the number of model parameters is a widely recognized, direct way to improve neural network performance. Currently, mainstream large models have parameter …

Latest Advances in LoRA: A Comprehensive Review

Abstract: The rapid development of foundation models (large-scale neural networks trained on diverse and extensive datasets) has revolutionized artificial intelligence, driving unprecedented advances in fields such as natural language processing, computer vision, and scientific discovery. However, the enormous parameter counts of these models, often reaching billions or even trillions, pose significant challenges in adapting them to specific downstream …

Comprehensive Analysis of LoRA, QLoRA, RLHF, PPO, DPO, and Flash Attention

With the rapid development of large models, a single year has brought significant technological iteration: from LoRA, QLoRA, AdaLoRA, ZeroQuant, Flash Attention, and KTO to distillation techniques, incremental model training, data processing, and understanding new open-source models, almost every day brings new developments. As algorithm engineers, do you feel like your learning …

ReLoRA: Efficient Large Model Training Through Low-Rank Updates

This article focuses on reducing the training cost of large Transformer language models. The author introduces ReLoRA, a method based on low-rank updates. A core principle in the development of deep learning over the past decade has been to “stack more layers,” and the author explores whether stacking can similarly enhance training efficiency for …
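
The mechanism that lets a sequence of low-rank updates accumulate into a high-rank change is a periodic merge-and-restart step. The snippet below is a schematic of that idea, consistent with the LoRA sketch earlier on this page; it is illustrative only (the paper additionally resets part of the optimizer state and re-warms the learning rate, which is omitted here).

```python
import torch

@torch.no_grad()
def relora_restart(base_weight, A, B, scaling):
    """Schematic ReLoRA restart: fold the current low-rank update into the
    frozen base weight, then reinitialize the factors for the next cycle.
    Illustrative sketch; omits the paper's optimizer reset and LR re-warm."""
    base_weight.add_(scaling * (B @ A))   # merge B @ A into the base weight
    A.normal_(mean=0.0, std=0.01)         # fresh random A for the next cycle
    B.zero_()                             # B = 0 -> next update starts at zero
```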