arcayne@lemmy.today to Linux Gaming@lemmy.world • Bazzite founder might shutdown whole project if Fedora drops support for 32 bit packages · English · 7 · 18 days ago
CachyOS is great, much better than Bazzite or Nobara IMO. Been daily driving it on my gaming rig with an NVIDIA GPU for ~9mo. Great performance, no complaints.
Oooh, ouch - looks really neat! May actually cause me to retire my `extract` function. It suddenly feels a little incomplete by comparison, lol.

```bash
# Extract any archive
extract() {
  if [ -f "$1" ]; then
    case $1 in
      *.tar.bz2) tar xjf "$1" ;;
      *.tar.gz)  tar xzf "$1" ;;
      *.bz2)     bunzip2 "$1" ;;
      *.rar)     unrar x "$1" ;;
      *.gz)      gunzip "$1" ;;
      *.tar)     tar xf "$1" ;;
      *.tbz2)    tar xjf "$1" ;;
      *.tgz)     tar xzf "$1" ;;
      *.zip)     unzip "$1" ;;
      *.Z)       uncompress "$1" ;;
      *.7z)      7z x "$1" ;;
      *) echo "'$1' cannot be extracted via extract()" ;;
    esac
  else
    echo "'$1' is not a valid file"
  fi
}
```
Well, my full `functions.sh` won't fit in a comment, so here are two of my more unique functions that make life a little easier when contributing to busy OSS projects:

```bash
# Git fork sync functions
# Assumes standard convention: origin = your fork, upstream = original repo

## Sync fork with upstream before starting work
gss() {
  # Safety checks
  if ! git rev-parse --git-dir >/dev/null 2>&1; then
    echo "❌ Not in a git repository"
    return 1
  fi

  # Check if we're in a git operation state
  local git_dir=$(git rev-parse --git-dir)
  if [[ -f "$git_dir/rebase-merge/interactive" ]] || [[ -d "$git_dir/rebase-apply" ]] || [[ -f "$git_dir/MERGE_HEAD" ]]; then
    echo "❌ Git operation in progress. Complete or abort current rebase/merge first:"
    echo "   git rebase --continue (after resolving conflicts)"
    echo "   git rebase --abort (to cancel rebase)"
    echo "   git merge --abort (to cancel merge)"
    return 1
  fi

  # Check for uncommitted changes
  if ! git diff-index --quiet HEAD -- 2>/dev/null; then
    echo "❌ You have uncommitted changes. Commit or stash them first:"
    git status --porcelain
    echo ""
    echo "💡 Quick fix: git add . && git commit -m 'WIP' or git stash"
    return 1
  fi

  # Check for required remotes
  if ! git remote get-url upstream >/dev/null 2>&1; then
    echo "❌ No 'upstream' remote found. Add it first:"
    echo "   git remote add upstream <upstream-repo-url>"
    return 1
  fi
  if ! git remote get-url origin >/dev/null 2>&1; then
    echo "❌ No 'origin' remote found. Add it first:"
    echo "   git remote add origin <your-fork-url>"
    return 1
  fi

  local current_branch=$(git branch --show-current)

  # Ensure we have a main branch locally
  if ! git show-ref --verify --quiet refs/heads/main; then
    echo "❌ No local 'main' branch found. Create it first:"
    echo "   git checkout -b main upstream/main"
    return 1
  fi

  echo "🔄 Syncing fork with upstream..."
  echo "   Current branch: $current_branch"

  # Fetch with error handling
  if ! git fetch upstream; then
    echo "❌ Failed to fetch from upstream. Check network connection and remote URL."
    return 1
  fi

  echo "📌 Updating local main..."
  if ! git checkout main; then
    echo "❌ Failed to checkout main branch"
    return 1
  fi
  if ! git reset --hard upstream/main; then
    echo "❌ Failed to reset main to upstream/main"
    return 1
  fi

  echo "⬆️ Pushing updated main to fork..."
  if ! git push origin main; then
    echo "❌ Failed to push main to origin. Check push permissions."
    return 1
  fi

  echo "🔀 Rebasing feature branch on updated main..."
  if ! git checkout "$current_branch"; then
    echo "❌ Failed to checkout $current_branch"
    return 1
  fi
  if ! git rebase main; then
    echo "❌ Rebase failed due to conflicts. Resolve them and continue:"
    echo "   1. Edit conflicted files"
    echo "   2. git add <resolved-files>"
    echo "   3. git rebase --continue"
    echo "   Or: git rebase --abort to cancel"
    return 1
  fi

  echo "✅ Ready to work on branch: $current_branch"
}

## Sync fork and push feature branch
gsp() {
  # Safety checks
  if ! git rev-parse --git-dir >/dev/null 2>&1; then
    echo "❌ Not in a git repository"
    return 1
  fi

  local git_dir=$(git rev-parse --git-dir)
  if [[ -f "$git_dir/rebase-merge/interactive" ]] || [[ -d "$git_dir/rebase-apply" ]] || [[ -f "$git_dir/MERGE_HEAD" ]]; then
    echo "❌ Git operation in progress. Complete or abort first."
    return 1
  fi

  if ! git diff-index --quiet HEAD -- 2>/dev/null; then
    echo "❌ You have uncommitted changes. Commit or stash them first:"
    git status --porcelain
    return 1
  fi

  if ! git remote get-url upstream >/dev/null 2>&1; then
    echo "❌ No 'upstream' remote found"
    return 1
  fi
  if ! git remote get-url origin >/dev/null 2>&1; then
    echo "❌ No 'origin' remote found"
    return 1
  fi

  local current_branch=$(git branch --show-current)

  # Prevent pushing from main
  if [[ "$current_branch" == "main" ]]; then
    echo "❌ Cannot push from main branch. Switch to your feature branch first:"
    echo "   git checkout <your-feature-branch>"
    return 1
  fi

  # Show what we're about to do
  echo "⚠️ About to sync and push branch: $current_branch"
  echo "   This will:"
  echo "   • Fetch latest changes from upstream"
  echo "   • Rebase your branch on updated main"
  echo "   • Force-push to your fork (updates PR)"
  echo ""
  read -p "Continue? [y/N]: " -n 1 -r
  echo
  if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    echo "❌ Operation cancelled"
    return 0
  fi

  echo "🔄 Final sync with upstream..."
  if ! git fetch upstream; then
    echo "❌ Failed to fetch from upstream"
    return 1
  fi

  echo "📌 Updating local main..."
  if ! git checkout main; then
    echo "❌ Failed to checkout main"
    return 1
  fi
  if ! git reset --hard upstream/main; then
    echo "❌ Failed to reset main"
    return 1
  fi
  if ! git push origin main; then
    echo "❌ Failed to push main to origin"
    return 1
  fi

  echo "🔀 Rebasing feature branch..."
  if ! git checkout "$current_branch"; then
    echo "❌ Failed to checkout $current_branch"
    return 1
  fi
  if ! git rebase main; then
    echo "❌ Rebase failed. Resolve conflicts and try again:"
    echo "   git add <resolved-files> && git rebase --continue"
    echo "   Then run 'gsp' again"
    return 1
  fi

  echo "🚀 Pushing feature branch to fork..."
  if ! git push origin "$current_branch" --force-with-lease; then
    echo "❌ Failed to push to origin. The branch may have been updated."
    echo "   Run 'git pull origin $current_branch' and try again"
    return 1
  fi

  echo "✅ Feature branch $current_branch successfully pushed to fork"
}
```
Wow - you’ve certainly got a unique perspective on the situation, and I’m grateful that you took the time to share it. Thank you. It’s fascinating to hear from someone who actually worked with the guy.
I can relate to both the Linux struggle and your “I get their PoV but disagree” reaction. Had the same feeling when Kitty’s creator dismissed multiplexers as “a hack” - as a longtime tmux user, that stung. Great tool, but that philosophy never sat right with me. I bounced between most of the more popular terminals for years (Wezterm rocks but has performance issues, Kitty never felt quite right) so I was eager for Ghostty to drop. So far it’s delivered on what I was hoping for (despite needing a minor tweak or two out of the box).
I’m glad you found my last response so helpful. Sounds like exploring alternatives worked out well for you in the end, which is what matters. Cheers. :)
That’s fair, I get the frustration.
I guess I’ve been cutting Mitchell some slack since this is a passion project for him - his goal was to build the modern terminal he always wanted, so an opinionated feature set was always expected. And new terminals with actual new features need their own terminfo entries; it just comes with the territory. It’ll sort itself out as the databases catch up.
For now, though, you don’t need to address this on an individual host level. I’m in the same boat at work with thousands of servers. If you want to give Ghostty another shot, this wrapper handles the issue automatically, even for servers where AcceptEnv doesn’t include TERM or where SetEnv is disabled:
```bash
ssh() {
  if [[ "$TERM" == "xterm-ghostty" ]]; then
    TERM=xterm-256color command ssh "$@"
  else
    command ssh "$@"
  fi
}
```
Just drop it in your `.bashrc` (or `functions.sh` if you rock a modular setup) and SSH connections will auto-switch to compatible terminfo while keeping your local session full-featured. Best of both worlds. ¯\_(ツ)_/¯
Just gotta adjust your TERM value. You can do it per host in your ssh config, if you don’t wanna set it globally.
SetEnv TERM=xterm-256color
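For example, something like this in your ssh config - the hostname here is just a placeholder:

```
# ~/.ssh/config
Host legacy-box.example.com
    SetEnv TERM=xterm-256color
```

Keep in mind the remote sshd still has to accept it (TERM in AcceptEnv, SetEnv not disabled), which is the gap the ssh() wrapper above covers.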
I’d recommend using OpenTofu (Terraform) for initial provisioning of VMs and then use Ansible for post-provisioning config & management. That way you’re letting both tools play to their strengths.
https://registry.terraform.io/providers/bpg/proxmox/latest/docs
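For the provisioning half, a minimal sketch with the bpg/proxmox provider might look something like this - the endpoint, node name, and template VM ID are placeholders, and attribute names are from memory, so double-check the docs linked above:

```hcl
terraform {
  required_providers {
    proxmox = {
      source = "bpg/proxmox"
    }
  }
}

provider "proxmox" {
  endpoint  = "https://pve.example.lan:8006/" # placeholder Proxmox API endpoint
  api_token = var.proxmox_api_token           # token in user@realm!tokenid=secret form
}

# Clone a VM from an existing template; Ansible handles everything after first boot
resource "proxmox_virtual_environment_vm" "web01" {
  name      = "web01"
  node_name = "pve1" # placeholder node

  clone {
    vm_id = 9000 # placeholder template VM ID
  }

  cpu {
    cores = 2
  }

  memory {
    dedicated = 2048
  }
}
```

Then point Ansible at the resulting hosts for packages, users, and app config - that keeps the OpenTofu state small and your day-2 changes out of it.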
Did you mean Netbox?
I’ve been wondering about this lately, as I’m unhappily employed but don’t want Indeed to be the only place I window shop.
The challenge is, I’m not really sure what to look for in terms of “good” recruiters. Based on your recent experience, do you have any tips or advice you’d be willing to offer?
arcayne@lemmy.today to Selfhosted@lemmy.world • Setting Up a Self-Hosted GitHub runner for CI/CD · English · 1 · 5 months ago
Well, yeah, that's why I'm saying if the action isn't available directly from Forgejo, just write out the full action URL like the example in my last comment and pull it directly from GitHub. Most/all of the actions you're pulling from Forgejo are originally forked from GitHub anyway. ¯\_(ツ)_/¯
arcayne@lemmy.today to Selfhosted@lemmy.world • Setting Up a Self-Hosted GitHub runner for CI/CD · English · 1 · 5 months ago
With both Gitea and Forgejo, sometimes you need to hardcode the action URL, like:
https://github.com/actions/setup-java@v4
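In a workflow, that looks roughly like this - the runner label, Java version, and build step are just examples, adjust to your setup:

```yaml
# .forgejo/workflows/build.yml  (or .gitea/workflows/ on Gitea)
on: [push]

jobs:
  build:
    runs-on: docker # whatever label your runner registered with
    steps:
      - uses: https://github.com/actions/checkout@v4
      - uses: https://github.com/actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '21'
      - run: ./mvnw -B package # example build step
```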
arcayne@lemmy.today to Unpopular Opinion@lemmy.world • Overalls are far more comfortable than pants · English · 1 · 9 months ago
I agree. Years back, when I was getting my CDL in the construction industry, my trainer recommended I get some overalls for comfort. I was in fairly good shape at the time, but man - the relief I felt from not having a belt digging into my gut while behind the wheel made it a lot easier to hop out of the cab and throw chain at a good pace, and I never had to worry about anything coming untucked. Was certainly a game changer.
Does Docker, PyPI, APT, Ansible Galaxy, etc. I use it at work as part of our undercloud for OpenStack. It's the go-to for StackHPC, too.
arcayne@lemmy.today to Selfhosted@lemmy.world • XPipe - A connection hub for all your servers - Status update for the v12 release - Now with selfhst icons! · English · 2 · 9 months ago
That's a fair take. The pricing model has changed dramatically since I last looked at it, but at the same time, the dev has obviously put a lot of thought into these changes, so I find it difficult to fault him. He's gotta make a living somehow.
In general, if someone has more than one Proxmox node to manage, chances are they’ve got some type of homelab, which isn’t exactly the cheapest hobby out there to begin with. If XPipe enhances their experience, I’d say that’s worth a few bucks. If not, they can always git gud in the terminal and do the legwork themselves, but time = $, so…
arcayne@lemmy.today to Selfhosted@lemmy.world • XPipe - A connection hub for all your servers - Status update for the v12 release - Now with selfhst icons! · English · 8 · 9 months ago
It's a free tool that is relevant to a lot of users in both of those communities, and because of the support from those communities, the author was able to pivot to working on XPipe full-time. That's no small feat for a solo dev, and I for one appreciate seeing these updates.
If you decided to devote all your time and energy to a project that was supposed to pay your bills, would you just sit and twiddle your thumbs thinking “if you build it, they will come”? ¯\_(ツ)_/¯
Solid choice. It’s been my go-to DNS+DHCP solution for over 5 years and has never let me down. Also a fan of DNSDist+PowerDNS, but for most environments (especially home/lab), Technitium wins by a mile.
Not sure if it’d fit your use case 100%, but this has been a nice middle ground solution for LE certs in my lab: https://www.certwarden.com/
127.13.37.69:420