I run a self-hosted environment at home, including an Immich photo server and an SSH server, both happily residing on my local network (LAN). To access these services remotely, I configured port forwarding on my OpenWrt router. Initially, everything seemed to function flawlessly – I could access my services from anywhere.
However, a peculiar issue soon surfaced: my SSH server began refusing connections. A dive into the auth.log and Nginx access logs revealed something unexpected. All incoming internet traffic, whether to SSH or Immich, appeared to originate from 192.168.x.x – the LAN IP address of my OpenWrt router itself! This became a critical problem because persistent SSH scanning attempts from the internet were, understandably, being flagged by Fail2Ban. The consequence? Fail2Ban banned the “offending” IP, which was my router, effectively severing access to all my forwarded services for everyone, including myself.
My troubleshooting journey involved extensive Googling and even consulting AI assistants, but these efforts proved fruitless. One AI suggestion was to install Nginx on the OpenWrt router itself to act as a reverse proxy. While a reverse proxy can preserve client IPs (using headers like X-Forwarded-For), installing a full-fledged Nginx instance on a resource-constrained router solely for this felt like overkill and not the most direct solution.
The breakthrough came from an unexpected source: an iStoreOS issue tracker. (For the uninitiated, iStoreOS is a popular OpenWrt fork). While the issue thread didn’t contain a step-by-step solution, the original poster mentioned resolving a similar problem by tweaking NAT rules to prevent source address translation. Given the shared lineage between OpenWrt and iStoreOS, I suspected this approach might be applicable.
Armed with this clue, I navigated to my OpenWrt LuCI interface: Network -> Firewall -> NAT Rules.
Here, I added a new rule with the following key parameters:
Action: ACCEPT
Crucially, I enabled the option that prevents source address rewriting, typically labeled something like "Disable Source NAT" or "Disable Address Rewrite" (the exact wording varies slightly with your OpenWrt version/theme, but the intent is to stop the router from rewriting the source IP address of incoming packets).
More accurately: a typical OpenWrt port forward is a DNAT rule, and the issue is that an overly broad SNAT/Masquerade rule may also be applied to this forwarded traffic. The key is to ensure that SNAT is not performed for these specific port-forwarded connections, allowing the original source IP to pass through to the internal server. The "Disable Source NAT" option is often available directly within the port forward (DNAT) rule definition itself; otherwise, a specific "no SNAT" rule needs to be crafted.
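For reference, here is roughly what such a "no SNAT" rule looks like in /etc/config/firewall. This is a sketch of my setup, not a universal recipe: the zone name and server address are placeholders you would replace with your own.

config nat
	option name 'No-SNAT-to-server'
	option src 'lan'                 # zone the forwarded traffic leaves through
	option dest_ip '192.168.1.10'    # placeholder: your internal server's LAN IP
	option proto 'all'
	option target 'ACCEPT'           # ACCEPT in the NAT table means: do not rewrite the source address

After restarting the firewall (/etc/init.d/firewall restart), connections forwarded to the server keep their original public source IP.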
The result? Success!
Immediately after applying and saving this NAT rule, my server logs began showing the actual public IP addresses of external visitors. Most importantly, Fail2Ban could now accurately identify and ban malicious external IPs without inadvertently blacklisting my entire home network via the router’s IP.
Why this works (The Geeky Bit):
Standard port forwarding involves Destination NAT (DNAT), where the router changes the destination IP of an incoming packet (from its public WAN IP to your internal server’s LAN IP). However, many default OpenWrt setups also apply Source NAT (SNAT) or Masquerading for traffic passing through it, especially from LAN to WAN. In some configurations, this SNAT can also incorrectly get applied to WAN-to-LAN forwarded traffic, replacing the original client’s IP with the router’s LAN IP before the packet reaches your internal server.
The “Disable Source NAT” or “Disable Address Rewrite” option for the relevant traffic flow instructs OpenWrt’s netfilter firewall to not perform this source IP alteration for packets destined for your forwarded ports. This allows your internal services to see the true originating IP, essential for accurate logging, geolocation, and security tools like Fail2Ban.
If you’re facing a similar issue where your internal services behind an OpenWrt router only see the router’s IP for external connections, diving into your NAT rules and looking for an option to prevent source address translation on your port forwards might just be the elegant solution you need.
This is the first article about running an AI model on a Mac Studio; I will continue migrating my environment from CUDA / Nvidia GPUs to Mac MPS.
Why did I choose Mac Studio?
I chose the Mac Studio because it is less expensive than a comparable Nvidia GPU setup. It has 192GB of unified memory that can be used by the GPU, which makes it possible to migrate programs from Nvidia GPUs and save some money for personal use.
What is Fuyu-8B?
Quoting Adept's release announcement: "We are releasing Fuyu-8B, a small version of the multimodal model that powers our product." The model is available on HuggingFace. Adept thinks Fuyu-8B is exciting because:
It has a much simpler architecture and training procedure than other multimodal models, making it easier to understand, scale, and deploy.
It is designed from the ground up for digital agents, so it can support arbitrary image resolutions, answer questions about graphs and diagrams, answer UI-based questions, and perform fine-grained localization on screen images.
It is fast – we can get responses for large images in less than 100 milliseconds.
Despite being optimized for our use case, it performs well at standard image understanding benchmarks such as visual question-answering and natural-image-captioning.
Ok, let’s do it now.
Prepare the environment:
You need Python 3.8+ (required by recent transformers releases) and virtualenv installed. Conda or venv also work.
virtualenv -p python3 py3
source py3/bin/activate
Install HuggingFace transformers from source by cloning it from GitHub:
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install .
You are almost done here; now we can start the samples.
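Before running them, it is worth a quick sanity check that your PyTorch build can actually see the MPS backend:

import torch

# Both should print True on Apple Silicon with a recent PyTorch build
print(torch.backends.mps.is_available())  # the MPS device can be used
print(torch.backends.mps.is_built())      # this build was compiled with MPS support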
Sample 1:
import torch
from transformers import FuyuProcessor, FuyuForCausalLM
from PIL import Image

# load model and processor
model_id = "."  # local checkout; use "adept/fuyu-8b" to download from HuggingFace
processor = FuyuProcessor.from_pretrained(model_id)
model = FuyuForCausalLM.from_pretrained(model_id, device_map="mps", torch_dtype=torch.float16)

# prepare inputs for the model
text_prompt = "Generate a coco-style caption.\n"
image_path = "bus.png"  # https://huggingface.co/adept-hf-collab/fuyu-8b/blob/main/bus.png
image = Image.open(image_path)
inputs = processor(text=text_prompt, images=image, return_tensors="pt")
for k, v in inputs.items():
    inputs[k] = v.to("mps")

# autoregressively generate text
generation_output = model.generate(**inputs, max_new_tokens=7)
generation_text = processor.batch_decode(generation_output[:, -7:], skip_special_tokens=True)
print(generation_text)
Sample 2:
import os
from transformers import FuyuProcessor, FuyuForCausalLM
from PIL import Image
import torch

def list_files_in_directory(path, extensions=[".png", ".jpeg", ".jpg", ".JPG", ".PNG", ".JPEG"]):
    files = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f)) and any(f.endswith(ext) for ext in extensions)]
    return files

def main():
    # load model and processor
    model_id = "."  # local checkout; use "adept/fuyu-8b" to download from HuggingFace
    processor = FuyuProcessor.from_pretrained(model_id)
    # To solve OOM, float16 enables operation with only 24GB of VRAM. Alternatively,
    # float16 can be replaced with bfloat16, with differences in loading and inference time.
    model = FuyuForCausalLM.from_pretrained(model_id, device_map="mps", torch_dtype=torch.float16)

    # Load the last image path, or ask the user for one
    try:
        with open("last_path.txt", "r") as f:
            last_path = f.read().strip()
        user_input = input(f"Do you want to use the last path '{last_path}'? (yes/no, default yes): ")
        if user_input and user_input.lower() == 'no':
            raise ValueError("User chose to input a new path.")
    except (OSError, ValueError):
        last_path = input("Please provide the image directory path: ")
        with open("last_path.txt", "w") as f:
            f.write(last_path)

    while True:
        # List the first 10 images in the directory
        images = list_files_in_directory(last_path)[:10]
        for idx, image in enumerate(images, start=1):
            print(f"{idx}. {image}")

        # Allow the user to select an image
        image_choice = input(f"Choose an image (1-{len(images)}) or enter its name: ")
        try:
            idx = int(image_choice)
            image_path = os.path.join(last_path, images[idx - 1])
        except ValueError:
            image_path = os.path.join(last_path, image_choice)
        try:
            image = Image.open(image_path)
        except OSError:
            print("Cannot open the image. Please check the path and try again.")
            continue

        questions = [
            "Generate a coco-style caption.",
            "What color is the object?",
            "Describe the scene.",
            "Describe the facial expression of the character.",
            "Tell me about the story from the image.",
            "Enter your own question",
        ]

        # Ask the user to select a question from the list, or to input their own
        for idx, q in enumerate(questions, start=1):
            print(f"{idx}. {q}")
        q_choice = int(input("Choose a question or enter your own: "))
        if q_choice <= 5:
            text_prompt = questions[q_choice - 1] + '\n'
        else:
            text_prompt = input("Please enter your question: ") + '\n'

        while True:  # To enable the user to ask further questions about the same image
            inputs = processor(text=text_prompt, images=image, return_tensors="pt")
            for k, v in inputs.items():
                inputs[k] = v.to("mps")
            # To eliminate the attention_mask warning
            inputs["attention_mask"] = torch.ones(inputs["input_ids"].shape, device="mps")
            generation_output = model.generate(**inputs, max_new_tokens=50, pad_token_id=model.config.eos_token_id)
            generation_text = processor.batch_decode(generation_output[:, -50:], skip_special_tokens=True)
            print("Answer:", generation_text[0])
            text_prompt = input("Ask another question about the same image or type '/exit' to exit: ") + '\n'
            if text_prompt.strip() == '/exit':
                break

if __name__ == "__main__":
    main()
Yes, it is Chinese. But not the Chinese Fuyu-8B knows: it is not "食" (eating), but "我不想洗碗" ("I don't want to wash the dishes"). Fuyu-8B is lying. lol.
This tool allows you to redirect any TCP connection to a SOCKS or HTTPS proxy using your firewall, so the redirection may be system-wide or network-wide.
When is redsocks useful?
you want to route part of your TCP traffic via an OpenSSH DynamicForward SOCKS5 port using firewall policies (see the sketch after this list). That was the original redsocks development goal;
you use a DVB ISP that provides internet connectivity through a special daemon, sometimes called an "Internet accelerator"; the accelerator acts as a proxy but has no "transparent proxy" feature, and you need one. Globax was an example of such an accelerator, though Globax 5 added a transparent proxy feature. That was the second redsocks development goal;
you have to pass traffic through a proxy due to corporate network limitations. That was never a goal for redsocks, but users have reported success with some proxy configurations.
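As a minimal sketch of the first case (my own example, not from the redsocks docs: the chain name and ports are arbitrary choices, and redsocks is assumed to be listening on 127.0.0.1:12345, forwarding to a SOCKS5 proxy opened with ssh -D 1080), the firewall side can look roughly like this:

# Create a dedicated NAT chain and exempt local/LAN destinations
iptables -t nat -N REDSOCKS
iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -d 192.168.0.0/16 -j RETURN
# Redirect everything else to the local redsocks listener
iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 12345
# Apply the chain to locally generated TCP traffic
iptables -t nat -A OUTPUT -p tcp -j REDSOCKS

The matching redsocks.conf section would then declare local_ip = 127.0.0.1, local_port = 12345, type = socks5, and point ip/port at 127.0.0.1:1080 where the SSH dynamic forward is listening.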
Defining the UNet2DModel (from the diffusers training tutorial):

from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=config.image_size,  # the target image resolution
    in_channels=3,  # the number of input channels, 3 for RGB images
    out_channels=3,  # the number of output channels
    layers_per_block=2,  # how many ResNet layers to use per UNet block
    block_out_channels=(128, 128, 256, 256, 512, 512),  # the number of output channels for each UNet block
    down_block_types=(
        "DownBlock2D",  # a regular ResNet downsampling block
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "AttnDownBlock2D",  # a ResNet downsampling block with spatial self-attention
        "DownBlock2D",
    ),
    up_block_types=(
        "UpBlock2D",  # a regular ResNet upsampling block
        "AttnUpBlock2D",  # a ResNet upsampling block with spatial self-attention
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
    ),
)
And the evaluation helper that samples a grid of images with DDPMPipeline during training:

import math
import os
import torch
from diffusers import DDPMPipeline
from diffusers.utils import make_image_grid

def evaluate(config, epoch, pipeline):
    # Sample some images from random noise (this is the backward diffusion process).
    # The default pipeline output type is `List[PIL.Image]`
    images = pipeline(
        batch_size=config.eval_batch_size,
        generator=torch.manual_seed(config.seed),
    ).images

    # Make a grid out of the images
    image_grid = make_image_grid(images, rows=4, cols=4)

    # Save the images
    test_dir = os.path.join(config.output_dir, "samples")
    os.makedirs(test_dir, exist_ok=True)
    image_grid.save(f"{test_dir}/{epoch:04d}.png")
1. Transactions with too low a nonce (lower than the account's current nonce) are immediately rejected.
2. Transactions with too high a nonce are placed in the transaction pool queue.
3. If transactions are then sent whose nonces fill the gap between the last valid nonce and the too-high nonce, completing the sequence, all the transactions in the sequence will get processed and mined.
4. The transaction pool queue will only hold a maximum of 64 transactions with the same From: address whose nonces are out of sequence. In other words, when doing batch transfers, do not send more than 64 transactions from the same account at once (see the sketch after this list).
5. When a geth instance is shut down and restarted, transactions in the transaction pool queue disappear.
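For illustration, a common batch-sending pattern with web3.py (a sketch, not from the original notes; the RPC endpoint, key, addresses, and amounts are placeholders) is to fetch the next valid nonce once and then increment it locally, so the sequence never has a gap and nothing is left waiting in the queue:

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # placeholder geth RPC endpoint
ACCOUNT = "0xYourAddress"   # placeholder
PRIVATE_KEY = "0xYourKey"   # placeholder

transfers = [("0xRecipientOne", 10**18), ("0xRecipientTwo", 10**18)]  # (address, wei) pairs

# Start from the network's view of the next valid nonce, then count up locally
nonce = w3.eth.get_transaction_count(ACCOUNT)
for to, value in transfers:
    tx = {
        "from": ACCOUNT,
        "to": to,
        "value": value,
        "gas": 21000,
        "gasPrice": w3.eth.gas_price,
        "nonce": nonce,
    }
    signed = w3.eth.account.sign_transaction(tx, private_key=PRIVATE_KEY)
    # the attribute is raw_transaction in web3.py v7+, rawTransaction in older releases
    w3.eth.send_raw_transaction(signed.raw_transaction)
    nonce += 1  # no gaps, so every transaction is immediately mineable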
I planned to build a self-designed photo and movie player based on a Raspberry Pi, which I could also use as a photo frame. If I need to improve performance on the Pi, I think I will need to write it in Python.
Part 1 – Build the Foundation
In this part, we will focus on preparing Raspbian Lite.
1. Download the latest Raspbian Lite image.
2. Flash the Raspbian Lite image onto the SD / microSD card. (There are plenty of guides on how to do this; for macOS, Linux, and Windows users, Etcher is an easy-to-use application that can help.)
3. Insert the SD / microSD card into the Pi.
4. Connect the Pi to the Internet using an Ethernet cable. If you want to use Wi-Fi instead, you will have to read up on configuring your wireless adapter from the command line after the Pi has finished booting (a wpa_supplicant sketch follows this list).
5. Connect your TV / Monitor and keyboard. (Mouse is optional at this time.) Turn on the Pi. The Pi should boot up successfully and a prompt to log in will appear.
6. Log into Raspbian. The username is pi and the password is raspberry.
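For step 4, the usual approach on Raspbian Lite is to add your network to wpa_supplicant.conf (a sketch; the SSID and passphrase are placeholders):

# /etc/wpa_supplicant/wpa_supplicant.conf (excerpt)
network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
}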
Now you can write your Java program. For example, I wrote a test program with a button in the center of the screen: once I click the button, the window changes to fill the entire screen.
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;

public class FullScreenTest {
    public static void main(String[] args) {
        final JFrame f = new JFrame("FullScreenTest");
        final JButton btn = new JButton("FullScreen");
        btn.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                if (btn.getText().equals("FullScreen")) {
                    // Dispose first: the undecorated flag can only be changed on a hidden frame
                    f.dispose();
                    f.setUndecorated(true);
                    f.getGraphicsConfiguration().getDevice().setFullScreenWindow(f);
                    f.setVisible(true);
                    btn.setText("NormalMode");
                } else {
                    // Leave full-screen mode and restore the window decorations
                    f.dispose();
                    f.setUndecorated(false);
                    f.getGraphicsConfiguration().getDevice().setFullScreenWindow(null);
                    f.setVisible(true);
                    btn.setText("FullScreen");
                }
            }
        });
        f.getContentPane().add(btn);
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.setSize(320, 240);
        f.setVisible(true);
    }
}
In order to have a command or program run when the Pi boots, you can add commands to the rc.local file. This is especially useful if you want to be able to plug your Pi in to power headless, and have it run a program without configuration or a manual start.
EDITING RC.LOCAL
On your Pi, edit the file /etc/rc.local using the editor of your choice. You must edit it as root, for example:
sudo nano /etc/rc.local
Add commands below the comment, but leave the line exit 0 at the end, then save the file and exit.
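For example, to start the photo player automatically at boot (a sketch; the jar path is a placeholder for wherever your program lives), the end of /etc/rc.local could look like this:

# Launch the player in the background so rc.local does not block the boot
java -jar /home/pi/photoplayer.jar &

exit 0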