To add:
The family contribution is more like €300
It's true that Brittany is a special case: private schools there don't proselytize. That said, I imagine there are fundamentalists everywhere.
So simple
dd if=/dev/cdrom of=image_name.iso
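Once the copy is done, it is worth checking the image against the source. A minimal sketch of that check, using a throwaway file instead of /dev/cdrom:

```shell
# stand-in for /dev/cdrom (assumption: no real drive available here)
dd if=/dev/urandom of=source.img bs=1K count=64 2>/dev/null
# same invocation as above, with the stand-in as input
dd if=source.img of=image_name.iso 2>/dev/null
# byte-for-byte comparison; prints "copy OK" only if identical
cmp source.img image_name.iso && echo "copy OK"
```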
Young generation collectors

Copy (enabled with -XX:+UseSerialGC)
the serial copy collector, uses one thread to copy surviving objects from Eden to Survivor spaces and between Survivor spaces until it decides they've been there long enough, at which point it copies them into the old generation.

PS Scavenge (enabled with -XX:+UseParallelGC)
the parallel scavenge collector, like the Copy collector, but uses multiple threads in parallel and has some knowledge of how the old generation is collected (essentially written to work with the serial and PS old gen collectors).

ParNew (enabled with -XX:+UseParNewGC)
the parallel copy collector, like the Copy collector, but uses multiple threads in parallel and has an internal 'callback' that allows an old generation collector to operate on the objects it collects (really written to work with the concurrent collector).

G1 Young Generation (enabled with -XX:+UseG1GC)
the garbage first collector, uses the 'Garbage First' algorithm which splits up the heap into lots of smaller spaces, but these are still separated into Eden and Survivor spaces in the young generation for G1.

Old generation collectors

MarkSweepCompact (enabled with -XX:+UseSerialGC)
the serial mark-sweep collector, the daddy of them all, uses a serial (one thread) full mark-sweep garbage collection algorithm, with optional compaction.

PS MarkSweep (enabled with -XX:+UseParallelOldGC)
the parallel scavenge mark-sweep collector, a parallelised version (i.e. uses multiple threads) of MarkSweepCompact.

ConcurrentMarkSweep (enabled with -XX:+UseConcMarkSweepGC)
the concurrent collector, a garbage collection algorithm that attempts to do most of the garbage collection work in the background without stopping application threads (there are still phases where it has to stop them, but these are kept to a minimum). Note that if the concurrent collector fails to keep up with the garbage, it falls back to the serial MarkSweepCompact collector for (just) the next GC.

G1 Mixed Generation (enabled with -XX:+UseG1GC)
the garbage first collector, uses the 'Garbage First' algorithm which splits up the heap into lots of smaller spaces.
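To check which of these collectors a given JVM actually enables, the final flag values can be queried directly (a sketch; assumes `java` is on the PATH, and note that ParNew/CMS were removed in recent JDKs):

```shell
# ask the JVM which GC flags it settles on when G1 is requested
java -XX:+UseG1GC -XX:+PrintFlagsFinal -version 2>/dev/null \
  | grep -E 'Use(SerialGC|ParallelGC|G1GC)'
```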
Cheat sheet
SEARCHED=<file>
for f in $(find . -type f -name "*.jar"); do echo "Searching in $f:"; jar tvf "$f" | grep "$SEARCHED"; done
Another solution (not as good)
#!/bin/bash
export SEARCHED=<file>
find . -type f -name "*.jar" -exec sh -c '
for FILE in "$@"; do
    jar tf "$FILE" | grep -H --label="$FILE" "$SEARCHED"
done' sh {} +
Because I lost this many times:
- Go to menu
- Select your menu
- Select your menu item
- Click on Icon
- Select an Icon
- In the popup that opens to choose the icon, select "hide text" on the right
So simple..
And since 10.8 SP1, also do not delete:
<em install dir>/product/enterprisemanager/configuration/org.eclipse.equinox.simpleconfigurator/bundles.info
Ebook
https://fourtoutici.click/
https://ebookchasseur.com/
https://recherche-ebook.fr/
A page to convert units
And to stop looking it up every time:
- G = Zoll = unit of measure = inches in general. Often equal to DN (nominal diameter in mm) = NPS (in inches)
- IG = Innengewinde = internal (female) thread
- AG = Außengewinde = external (male) thread
3/4" (19.05 mm): IG 24.12 mm / AG 26.44 mm : France - 20/27
1" (25.4 mm): IG 30.29 mm / AG 33.24 mm : France - 26/34
1 1/4" (31.75 mm): IG 38.95 mm / AG 41.91 mm : France - 33/42
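The inch values in brackets above are just the nominal size times 25.4 (quick sanity check):

```shell
# 3/4", 1" and 1 1/4" in millimetres
awk 'BEGIN { printf "%.2f %.2f %.2f\n", 0.75*25.4, 1*25.4, 1.25*25.4 }'
# prints: 19.05 25.40 31.75
```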
And also:
https://fr.wikipedia.org/wiki/Dimension_des_tuyaux
https://www.mabeo-direct.com/document/A-361618--tableau-des-correspondances-pour-les-raccords
Useful RAID commands:
# Conf mdadm
cat /etc/mdadm.conf
# Current md stats
cat /proc/mdstat
# Detail on an MD device:
mdadm --query --detail /dev/md3
# Detail on a partition:
mdadm --examine /dev/sda3
Complete with OVH RAID help
# Copy partition table from safe disk to new disk (for GPT partition - see fdisk -l)
sgdisk -R /dev/newdisk /dev/safedisk
# afterwards, you have to randomize the GUID on the new hard disk to ensure that they are unique (from [howto forge](https://www.howtoforge.com/tutorial/linux-raid-replace-failed-harddisk/))
sgdisk -G /dev/sdb
# check partitions are copied
sgdisk -p /dev/safedisk
sgdisk -p /dev/newdisk
# Add each partition to the RAID array:
mdadm --manage /dev/mdX2 --add /dev/newdiskX2
mdadm --manage /dev/mdX1 --add /dev/newdiskX1
# Follow the reconstruction with
mdadm --detail /dev/mdX
or
cat /proc/mdstat
# Rebuild Status : 21% complete
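To follow the rebuild from a script, the progress can also be grepped out of /proc/mdstat; a sketch against a sample of the typical output (the sample text below is an assumption):

```shell
# sample of what /proc/mdstat looks like during a rebuild
mdstat='md3 : active raid1 sdb3[2] sda3[0]
      [====>................]  recovery = 21.0% (408215040/1943310336) finish=120.3min speed=211004K/sec'
# extract just the completion percentage
printf '%s\n' "$mdstat" | grep -o 'recovery = [0-9.]*%'
# prints: recovery = 21.0%
```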
To add git-lfs files
- Add files (examples)
git lfs track <mybigfile.jar>
git lfs track "**/*jar"
git config lfs.fetchinclude "textures,images/foo*"
git config lfs.fetchexclude "media/reallybigfiles"
- Commit the new tracking
git add --renormalize .
git lfs migrate info
git lfs migrate import --include="mybigfile.jar"
git lfs migrate info
- List the LFS files
git lfs ls-files -a -s
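For reference, `git lfs track` only edits `.gitattributes`; tracking `**/*jar` results in a line like the one below (written by hand here, so the sketch also works without git-lfs installed):

```shell
mkdir -p lfs-demo && cd lfs-demo
# the attributes line that `git lfs track "**/*jar"` would append
echo '**/*jar filter=lfs diff=lfs merge=lfs -text' >> .gitattributes
cat .gitattributes
```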
To clean up, among other things, the history of LFS files:
- download the BFG jar
- create a clean copy of the repo (git clone)
- execute:
java -jar bfg-1.14.0.jar --delete-files database-hpa.tar.gz hpa-portal
git reflog expire --expire=now --all && git gc --prune=now --aggressive
Worst volume-control designs, some of them are really terrible :) A lot of random ones, but those are not the best.
originally from reddit (of course):
(via, of course, https://sebsauvage.net/links/?e0Quog)
Error in a subshell:
export extractor_version=$(cd extractor; mvn help:evaluate -Dexpression=project.version -q -DforceStdout)
the resulting output contains ANSI color codes:
1.0'$'\033''[0m.tar.gz'
Remove them with:
./somescript | sed -r "s/\x1B\[([0-9]{1,3}(;[0-9]{1,2};?)?)?[mGK]//g"
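Applied to the value above, the sed filter gives back a clean string (as an alternative, recent Maven can be told not to emit colors at all with `-Dstyle.color=never`, if your version supports that option):

```shell
# rebuild the polluted value from the example above
version=$(printf '1.0\033[0m')
# strip the ANSI escape sequences
clean=$(printf '%s' "$version" | sed -r "s/\x1B\[([0-9]{1,3}(;[0-9]{1,2};?)?)?[mGK]//g")
echo "${clean}.tar.gz"
# prints: 1.0.tar.gz
```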
Dump cache values
unbound-control dump_cache
Installation:
(https://memo-linux.com/debian-installer-le-serveur-dns-unbound/)
apt install unbound
cd /var/lib/unbound/
wget ftp://ftp.internic.net/domain/named.cache
mv named.cache root.hints && chown unbound:unbound root.hints
mv /etc/unbound/unbound.conf.d/root-auto-trust-anchor-file.conf /etc/unbound/unbound.conf.d/root-auto-trust-anchor-file.conf.original
mkdir /var/log/unbound
chown unbound: /var/log/unbound
# modify apparmor (see at the end)
systemctl restart unbound
Configuration file:
server:
    statistics-interval: 0
    extended-statistics: yes
    statistics-cumulative: yes
    verbosity: 3
    interface: 127.0.0.1
    port: 53
    do-ip4: yes
    do-ip6: yes
    do-udp: yes
    do-tcp: yes
    access-control: 127.0.0.0/8 allow ## allow my server
    access-control: 0.0.0.0/0 refuse ## refuse the whole rest of the Internet!
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    root-hints: "/var/lib/unbound/root.hints"
    hide-identity: yes
    hide-version: yes
    harden-glue: yes
    harden-dnssec-stripped: yes
    use-caps-for-id: yes
    cache-min-ttl: 3600
    cache-max-ttl: 86400
    prefetch: yes
    prefetch-key: yes
    num-threads: 6
    msg-cache-slabs: 16
    rrset-cache-slabs: 16
    infra-cache-slabs: 16
    key-cache-slabs: 16
    rrset-cache-size: 256m
    msg-cache-size: 128m
    so-rcvbuf: 1m
    unwanted-reply-threshold: 10000
    do-not-query-localhost: yes
    val-clean-additional: yes
    #use-syslog: yes
    #val-log-level: 2 (0, the default: nothing; 2: full)
    logfile: /var/log/unbound/unbound.log
And an additional AppArmor configuration to be able to write to a dedicated file:
(https://b4d.sablun.org/blog/2018-09-27-when-unbound-wont-write-logs/)
vim /etc/apparmor.d/local/usr.sbin.unbound
# Site-specific additions and overrides for usr.sbin.unbound.
# For more details, please see /etc/apparmor.d/local/README.
/var/log/unbound/unbound.log rw,
To try out.
Screen sharing and remote control (including Android)
For later: choosing your primary keys in Postgres.
via sebsauvage (https://sebsauvage.net/links/?vtwJpQ)
Open an H2 database:
java -jar com.h2database.h2-2.1.214.jar
or SquirrelSQL
add the driver with com.h2database.h2-2.1.214.jar.
Default credentials
login: sa
password: password
UPDATE OCTOBER 2022
If you want to quickly start the Restore process and don't care about having that option always enabled, just fire up the developer console in the browser and run this while on the Restore page:
var modelimport = new Ai1wm.Import();
var storage = Ai1wm.Util.random(12);
var options = Ai1wm.Util.form('#ai1wm-backups-form').concat({ name: 'storage', value: storage }).concat({ name: 'archive', value: 'REPLACE-WITH-ARCHIVE-NAME'});
// Set global params
modelimport.setParams(options);
// Start import
modelimport.start();
Limit the bandwidth used by scp, specified in Kbit/s:
scp -l 1000 file user@remote:/path/to/dest/