Program Documentation

documentation.help is a documentation hosting platform specializing in Windows documentation files. By converting “.chm” Windows Help files into HTML format, they become viewable and searchable on the web, so programming questions can be answered more easily. Documentation.help contains a wide variety of help documentation, from late-1990s help files for pre-Windows XP computers to the latest Python documentation.

netstat usage and flags

netstat -help
netstat: illegal option -- h
Usage: netstat [-AaLlnW] [-f address_family | -p protocol]
netstat [-gilns] [-f address_family]
netstat -i | -I interface [-w wait] [-abdgRtS]
netstat -s [-s] [-f address_family | -p protocol] [-w wait]
netstat -i | -I interface -s [-f address_family | -p protocol]
netstat -m [-m]
netstat -r [-Aaln] [-f address_family]
netstat -rs [-s]
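
Two common invocations matching the usage above (this is the macOS/BSD netstat; the Linux version takes different flags):

netstat -an -f inet   # list all IPv4 sockets with numeric addresses (no DNS lookups)
netstat -rn           # print the routing table in numeric form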

gzip flags

gzip -help
Apple gzip 264.50.1
usage: gzip [-123456789acdfhklLNnqrtVv] [-S .suffix] [<file> [<file> ...]]
-1 --fast fastest (worst) compression
-2 .. -8 set compression level
-9 --best best (slowest) compression
-c --stdout write to stdout, keep original files
--to-stdout
-d --decompress uncompress files
--uncompress
-f --force force overwriting & compress links
-h --help display this help
-k --keep don't delete input files during operation
-l --list list compressed file contents
-N --name save or restore original file name and time stamp
-n --no-name don't save original file name or time stamp
-q --quiet output no warnings
-r --recursive recursively compress files in directories
-S .suf use suffix .suf instead of .gz
--suffix .suf
-t --test test compressed file
-V --version display program version
-v --verbose print extra statistics
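
A few illustrative invocations of the flags above (file names are hypothetical):

gzip -9 access.log                 # best compression; replaces access.log with access.log.gz
gzip -dk backup.tar.gz             # decompress while keeping the original .gz file
gzip -c notes.txt > notes.txt.gz   # write to stdout, leaving notes.txt intact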

wget Help – commands, usage, flags, examples

GNU Wget 1.15, a non-interactive network retriever.
Usage: wget [OPTION]... [URL]...

Mandatory arguments to long options are mandatory for short options too.

Startup:
-V, --version display the version of Wget and exit.
-h, --help print this help.
-b, --background go to background after startup.
-e, --execute=COMMAND execute a `.wgetrc'-style command.

Logging and input file:
-o, --output-file=FILE log messages to FILE.
-a, --append-output=FILE append messages to FILE.
-d, --debug print lots of debugging information.
-q, --quiet quiet (no output).
-v, --verbose be verbose (this is the default).
-nv, --no-verbose turn off verboseness, without being quiet.
--report-speed=TYPE Output bandwidth as TYPE. TYPE can be bits.
-i, --input-file=FILE download URLs found in local or external FILE.
-F, --force-html treat input file as HTML.
-B, --base=URL resolves HTML input-file links (-i -F)
relative to URL.
--config=FILE Specify config file to use.

Download:
-t, --tries=NUMBER set number of retries to NUMBER (0 unlimits).
--retry-connrefused retry even if connection is refused.
-O, --output-document=FILE write documents to FILE.
-nc, --no-clobber skip downloads that would download to
existing files (overwriting them).
-c, --continue resume getting a partially-downloaded file.
--progress=TYPE select progress gauge type.
-N, --timestamping don't re-retrieve files unless newer than
local.
--no-use-server-timestamps don't set the local file's timestamp by
the one on the server.
-S, --server-response print server response.
--spider don't download anything.
-T, --timeout=SECONDS set all timeout values to SECONDS.
--dns-timeout=SECS set the DNS lookup timeout to SECS.
--connect-timeout=SECS set the connect timeout to SECS.
--read-timeout=SECS set the read timeout to SECS.
-w, --wait=SECONDS wait SECONDS between retrievals.
--waitretry=SECONDS wait 1..SECONDS between retries of a retrieval.
--random-wait wait from 0.5*WAIT...1.5*WAIT secs between retrievals.
--no-proxy explicitly turn off proxy.
-Q, --quota=NUMBER set retrieval quota to NUMBER.
--bind-address=ADDRESS bind to ADDRESS (hostname or IP) on local host.
--limit-rate=RATE limit download rate to RATE.
--no-dns-cache disable caching DNS lookups.
--restrict-file-names=OS restrict chars in file names to ones OS allows.
--ignore-case ignore case when matching files/directories.
-4, --inet4-only connect only to IPv4 addresses.
-6, --inet6-only connect only to IPv6 addresses.
--prefer-family=FAMILY connect first to addresses of specified family,
one of IPv6, IPv4, or none.
--user=USER set both ftp and http user to USER.
--password=PASS set both ftp and http password to PASS.
--ask-password prompt for passwords.
--no-iri turn off IRI support.
--local-encoding=ENC use ENC as the local encoding for IRIs.
--remote-encoding=ENC use ENC as the default remote encoding.
--unlink remove file before clobber.

Directories:
-nd, --no-directories don't create directories.
-x, --force-directories force creation of directories.
-nH, --no-host-directories don't create host directories.
--protocol-directories use protocol name in directories.
-P, --directory-prefix=PREFIX save files to PREFIX/...
--cut-dirs=NUMBER ignore NUMBER remote directory components.

HTTP options:
--http-user=USER set http user to USER.
--http-password=PASS set http password to PASS.
--no-cache disallow server-cached data.
--default-page=NAME Change the default page name (normally
this is `index.html'.).
-E, --adjust-extension save HTML/CSS documents with proper extensions.
--ignore-length ignore `Content-Length' header field.
--header=STRING insert STRING among the headers.
--max-redirect maximum redirections allowed per page.
--proxy-user=USER set USER as proxy username.
--proxy-password=PASS set PASS as proxy password.
--referer=URL include `Referer: URL' header in HTTP request.
--save-headers save the HTTP headers to file.
-U, --user-agent=AGENT identify as AGENT instead of Wget/VERSION.
--no-http-keep-alive disable HTTP keep-alive (persistent connections).
--no-cookies don't use cookies.
--load-cookies=FILE load cookies from FILE before session.
--save-cookies=FILE save cookies to FILE after session.
--keep-session-cookies load and save session (non-permanent) cookies.
--post-data=STRING use the POST method; send STRING as the data.
--post-file=FILE use the POST method; send contents of FILE.
--method=HTTPMethod use method "HTTPMethod" in the header.
--body-data=STRING Send STRING as data. --method MUST be set.
--body-file=FILE Send contents of FILE. --method MUST be set.
--content-disposition honor the Content-Disposition header when
choosing local file names (EXPERIMENTAL).
--content-on-error output the received content on server errors.
--auth-no-challenge send Basic HTTP authentication information
without first waiting for the server's
challenge.

HTTPS (SSL/TLS) options:
--secure-protocol=PR choose secure protocol, one of auto, SSLv2,
SSLv3, TLSv1 and PFS.
--https-only only follow secure HTTPS links
--no-check-certificate don't validate the server's certificate.
--certificate=FILE client certificate file.
--certificate-type=TYPE client certificate type, PEM or DER.
--private-key=FILE private key file.
--private-key-type=TYPE private key type, PEM or DER.
--ca-certificate=FILE file with the bundle of CA's.
--ca-directory=DIR directory where hash list of CA's is stored.
--random-file=FILE file with random data for seeding the SSL PRNG.
--egd-file=FILE file naming the EGD socket with random data.

FTP options:
--ftp-user=USER set ftp user to USER.
--ftp-password=PASS set ftp password to PASS.
--no-remove-listing don't remove `.listing' files.
--no-glob turn off FTP file name globbing.
--no-passive-ftp disable the "passive" transfer mode.
--preserve-permissions preserve remote file permissions.
--retr-symlinks when recursing, get linked-to files (not dir).

WARC options:
--warc-file=FILENAME save request/response data to a .warc.gz file.
--warc-header=STRING insert STRING into the warcinfo record.
--warc-max-size=NUMBER set maximum size of WARC files to NUMBER.
--warc-cdx write CDX index files.
--warc-dedup=FILENAME do not store records listed in this CDX file.
--no-warc-compression do not compress WARC files with GZIP.
--no-warc-digests do not calculate SHA1 digests.
--no-warc-keep-log do not store the log file in a WARC record.
--warc-tempdir=DIRECTORY location for temporary files created by the
WARC writer.

Recursive download:
-r, --recursive specify recursive download.
-l, --level=NUMBER maximum recursion depth (inf or 0 for infinite).
--delete-after delete files locally after downloading them.
-k, --convert-links make links in downloaded HTML or CSS point to
local files.
--backups=N before writing file X, rotate up to N backup files.
-K, --backup-converted before converting file X, back up as X.orig.
-m, --mirror shortcut for -N -r -l inf --no-remove-listing.
-p, --page-requisites get all images, etc. needed to display HTML page.
--strict-comments turn on strict (SGML) handling of HTML comments.

Recursive accept/reject:
-A, --accept=LIST comma-separated list of accepted extensions.
-R, --reject=LIST comma-separated list of rejected extensions.
--accept-regex=REGEX regex matching accepted URLs.
--reject-regex=REGEX regex matching rejected URLs.
--regex-type=TYPE regex type (posix).
-D, --domains=LIST comma-separated list of accepted domains.
--exclude-domains=LIST comma-separated list of rejected domains.
--follow-ftp follow FTP links from HTML documents.
--follow-tags=LIST comma-separated list of followed HTML tags.
--ignore-tags=LIST comma-separated list of ignored HTML tags.
-H, --span-hosts go to foreign hosts when recursive.
-L, --relative follow relative links only.
-I, --include-directories=LIST list of allowed directories.
--trust-server-names use the name specified by the redirection
url last component.
-X, --exclude-directories=LIST list of excluded directories.
-np, --no-parent don't ascend to the parent directory.

Mail bug reports and suggestions to <bug-wget@gnu.org>.
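
A few representative invocations built from the options above (the URLs are placeholders):

wget https://example.com/file.tar.gz       # download a single file
wget -c https://example.com/big.iso        # resume a partially-downloaded file
wget -O page.html https://example.com/     # save the document under a chosen name
wget -r -np -k https://example.com/docs/   # recurse below docs/ only, converting links for local viewing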

VIM (Vi IMproved) – Usage, Flags, Command Examples

vim Usage

Edit a file: vim test.txt

vim -help


VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Apr 4 2017 18:14:54)

usage: vim [arguments] [file ..] edit specified file(s)
or: vim [arguments] - read text from stdin
or: vim [arguments] -t tag edit file where tag is defined
or: vim [arguments] -q [errorfile] edit file with first error

Arguments:
-- Only file names after this
-v Vi mode (like "vi")
-e Ex mode (like "ex")
-E Improved Ex mode
-s Silent (batch) mode (only for "ex")
-d Diff mode (like "vimdiff")
-y Easy mode (like "evim", modeless)
-R Readonly mode (like "view")
-Z Restricted mode (like "rvim")
-m Modifications (writing files) not allowed
-M Modifications in text not allowed
-b Binary mode
-l Lisp mode
-C Compatible with Vi: 'compatible'
-N Not fully Vi compatible: 'nocompatible'
-V[N][fname] Be verbose [level N] [log messages to fname]
-D Debugging mode
-n No swap file, use memory only
-r List swap files and exit
-r (with file name) Recover crashed session
-L Same as -r
-T <terminal> Set terminal type to <terminal>
-u <vimrc> Use <vimrc> instead of any .vimrc
--noplugin Don't load plugin scripts
-p[N] Open N tab pages (default: one for each file)
-o[N] Open N windows (default: one for each file)
-O[N] Like -o but split vertically
+ Start at end of file
+<lnum> Start at line <lnum>
--cmd <command> Execute <command> before loading any vimrc file
-c <command> Execute <command> after loading the first file
-S <session> Source file <session> after loading the first file
-s <scriptin> Read Normal mode commands from file <scriptin>
-w <scriptout> Append all typed commands to file <scriptout>
-W <scriptout> Write all typed commands to file <scriptout>
-x Edit encrypted files
--startuptime <file> Write startup timing messages to <file>
-i <viminfo> Use <viminfo> instead of .viminfo
-h or --help Print Help (this message) and exit
--version Print version information and exit
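
Some common ways to launch vim using the arguments above (file names are hypothetical):

vim +25 notes.txt          # open notes.txt at line 25
vim -d old.conf new.conf   # diff mode, like vimdiff
vim -R production.log      # read-only mode, like view
vim -u NONE notes.txt      # start without loading any .vimrc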

End-of-central-directory signature not found.

If you are receiving this error, the most likely cause is that the zip file was truncated before its end, for example by an interrupted download or copy.

End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
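
To confirm that the archive is truncated rather than simply misnamed, unzip can test it (the file name is hypothetical); a truncated zip reproduces the error above:

unzip -t archive.zip   # -t tests archive integrity without extracting

If the file came from a download, re-downloading it (or comparing its size against the size the server reports) usually resolves the problem.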

HP True Graphics / True 3D

Brace yourself for breakthrough lossless video and graphics presentation from the cloud with HP True Graphics for HP Thin Clients, which offloads rich multimedia content to your GPU so your CPU can boost the efficiency of your infrastructure and scale easily for heavy compute demands.

  • Enhanced HD video and 3D graphics presentation
  • Improved, real-time mouse and keyboard interactivity
  • Up to 2x the average frame rates for streaming
  • As low as 1/3rd of typical thin client CPU usage
  • Low equipment burden, reduced IT maintenance, and fewer codec updates


From HP’s brochure on True Graphics:

Experience breakthrough video playback, smooth graphics rendering, and impressive high-speed performance with less lag time when you access rich graphical applications and multimedia content from the cloud with HP True Graphics for Windows®- and Linux®-based HP Thin Clients.

Optimized for your end users

View and manipulate cloud-based applications and multimedia content with smooth display and reduced lag. Enjoy high frame rates for large format applications, multi-video chat, and demanding programs like HD video, CAD, HTML5, and Silverlight.

Form factor friendly

Deploy a flexible solution that works with your Windows®- or HP ThinPro-based thin clients.

Efficient performance distribution

Power through more apps, redirect H.264 content straight to your GPU HW decoder, and add heavy compute programs to your current environment with a solution that off-loads tasks from the CPU.

Real-time interactivity

Get immediate keyboard and mouse responsiveness from your most demanding programs without the hassle of stream breaks or mouse lag in your open application windows.

IT-free streaming

Say goodbye to local codec playback issues with software that allows your users to instantly view rich content so your IT can get back to business-critical tasks.

Double the HP technology

Improve network performance and the end user experience, optimize network traffic for remote desktops and remote app streaming, and supercharge your data and rich content with the combined force of HP Velocity and HP True Graphics.

No additional cost, no kidding

HP True Graphics is included on select HP Thin Clients and available as a simple, free download for your current Windows®-, HP ThinPro-, or HP Smart Zero-based HP Thin Client running Citrix®. Even better, no training is required.

 

Without HP True Graphics

Challenging for rich graphics and video presentation in the cloud

In today’s remote cloud-computing environments, smooth streaming and high-quality presentation of HD video and 3D graphics are significant hurdles. IT and end users often experience poor graphics, slow frame rates, lag, heavy client CPU burden, and codec mismatches. Remote redirection of rich content requires a strong server, a high-bandwidth network to transmit the data, and an extremely powerful endpoint device to render the content. Current software solutions that attempt a resolution are extremely client-CPU intensive and require high-performance endpoint devices. Today’s high-performance endpoint devices, like PCs and workstations, aren’t as secure and reliable as a thin client in VDI and cloud computing environments.

The thin client demand

The industry solution has typically been multimedia offloading and server-side rendering, yet each approach gives thin client users just enough capability for average-quality video and graphics in a limited number of file formats. With increased industry adoption of H.264 as a popular format, the demand has grown stronger than ever for a more flexible, cost-effective, and efficient solution.

With HP True Graphics

Truly astounding cloud-based multimedia

HP True Graphics gives HP Thin Client users the ability to view and manipulate cloud-based applications and multimedia content with smooth display and reduced lag. Enjoy high frame rates for large format applications, multi-video chat, and demanding programs like HD video, CAD, HTML5, and Silverlight.

Key benefits

  • Breakthrough video playback
  • View and manipulate cloud-based applications and multimedia content
  • Flexible solution that works with your Windows®- or HP ThinPro-based thin clients
  • Redirect H.264 content straight to your GPU HW decoder
  • Immediate keyboard and mouse responsiveness
  • Low equipment burden and CPU utilization

HP True Graphics provides up to 2x the average frame rate at as low as 1/3rd of the typical thin client CPU utilization, with a single, simplified codec-based scheme. This technology drastically reduces the high client CPU usage that often can’t keep up with today’s heavy content demands, resulting in a more responsive end-user experience and a more efficient IT environment.

HP-True-Graphics-Requirements [pdf]

HP Velocity

HP Velocity is a software solution that improves the user experience for remote desktop and virtualized applications by addressing common network bottlenecks such as packet loss, network latency, and Wi-Fi congestion. HP Velocity provides the greatest performance improvement for remote and branch offices, teleworkers, Wi-Fi, and 3G/4G connections.

For the technical overview of how HP Velocity works, see this document.

 

Google Search Console Definitions


Errors:

Server error (5xx): Your server returned a 500-level error when the page was requested.

Redirect error: The URL returned a redirect error. It could be one of the following types: a redirect chain that was too long; a redirect loop; a redirect URL that eventually exceeded the maximum URL length; a bad or empty URL in the redirect chain.

Submitted URL blocked by robots.txt: You submitted this page for indexing, but the page is blocked by robots.txt. Try testing your page using the robots.txt tester.
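
For reference, a robots.txt rule like the following (the path is hypothetical) would block any submitted URL under /private/ and produce this error:

User-agent: *
Disallow: /private/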

Submitted URL marked ‘noindex’: You submitted this page for indexing, but the page has a ‘noindex’ directive in either a meta tag or the HTTP response. If you want this page to be indexed, you must remove that directive.
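
For illustration, the directive can take either form:

<meta name="robots" content="noindex">   (meta tag in the page’s HTML head)
X-Robots-Tag: noindex                    (header in the HTTP response)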

Submitted URL seems to be a Soft 404: You submitted this page for indexing, but the server returned what seems to be a soft 404.

Submitted URL returns unauthorized request (401): You submitted this page for indexing, but Google got a 401 (not authorized) response. Either remove authorization requirements for this page, or allow Googlebot to access your pages by verifying its identity.

Submitted URL not found (404): You submitted a non-existent URL for indexing.

Submitted URL has crawl issue: You submitted this page for indexing, and Google encountered an unspecified crawling error that doesn’t fall into any of the other reasons. Try debugging your page using Fetch as Google.

Warnings:

Indexed, though blocked by robots.txt: The page was indexed, despite being blocked by robots.txt (Google always respects robots.txt, but this doesn’t help if someone else links to the page). This is marked as a warning because we’re not sure if you intended to block the page from search results. If you do want to block this page, robots.txt is not the correct mechanism to avoid being indexed: you should either use ‘noindex’ or prohibit anonymous access to the page using authentication. You can use the robots.txt tester to determine which rule is blocking this page. Because of the robots.txt, any snippet shown for the page will probably be sub-optimal. If you do not want to block this page, update your robots.txt file to unblock it.

Valid:

Submitted and indexed: You submitted the URL for indexing, and it was indexed.

Indexed, not submitted in sitemap: The URL was discovered by Google and indexed. We recommend submitting all important URLs using a sitemap.

Indexed; consider marking as canonical: The URL was indexed. Because it has duplicate URLs, we recommend explicitly marking this URL as canonical.

Excluded:

Blocked by ‘noindex’ tag: When Google tried to index the page it encountered a ‘noindex’ directive, and therefore did not index it. If you do not want the page indexed, you have done so correctly. If you do want this page to be indexed, you should remove that ‘noindex’ directive.

Blocked by page removal tool: The page is currently blocked by a URL removal request. Removal requests are only good for a specified period of time (see the linked documentation). After that period, Googlebot may go back and index the page, even if you do not submit another index request. If you do not want the page to be indexed, use ‘noindex’, require authorization for the page, or remove the page. If you are a verified site owner, you can use the URL removals tool to see who submitted a URL removal request.

Blocked by robots.txt: This page was blocked to Googlebot with a robots.txt file. You can verify this using the robots.txt tester. Note that this does not mean that the page won’t be indexed through some other means. If Google can find other information about this page without loading it, the page could still be indexed (though this is less common). To ensure that a page is not indexed by Google, remove the robots.txt block and use a ‘noindex’ directive.

Blocked due to unauthorized request (401): The page was blocked to Googlebot by a request for authorization (401 response). If you do want Googlebot to be able to crawl this page, either remove authorization requirements, or allow Googlebot to access your pages by verifying its identity.

Crawl anomaly: An unspecified anomaly occurred when fetching this URL. This could mean a 4xx- or 5xx-level response code; try fetching the page using Fetch as Google to see if it encounters any fetch issues. The page was not indexed.

Crawled – currently not indexed: The page was crawled by Google, but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling.

Discovered – currently not indexed: The page was found by Google, but not crawled yet.

Alternate page with proper canonical tag: This page is a duplicate of a page that Google recognizes as canonical, and it correctly points to that canonical page, so nothing for you to do here!

Duplicate page without canonical tag: This page has duplicates, none of which is marked canonical. We think this page is not the canonical one. You should explicitly mark the canonical for this page. To learn which page is the canonical, click the table row to run an info: query for this URL, which should list its canonical page.

Duplicate non-HTML page: A non-HTML page (for example, a PDF file) is a duplicate of another page that Google has marked as canonical. Typically only the canonical URL will be shown in Google Search. If you like, you can specify a canonical page using the Link HTTP header in a response.
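
For illustration, a canonical can be declared in the HTML head or, for non-HTML files such as PDFs, in the Link HTTP header (example.com is a placeholder):

<link rel="canonical" href="https://example.com/page">
Link: <https://example.com/page>; rel="canonical"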

Google chose different canonical than user: This URL is marked as canonical for a set of pages, but Google thinks another URL makes a better canonical. Because we consider this page a duplicate, we did not index it; only the canonical page is indexed. We recommend that you explicitly mark this page as a duplicate of the canonical URL. To learn which page is the canonical, click the table row to run an info: query for this URL, which should list its canonical page.

Not found (404): This page returned a 404 error when requested. The URL was discovered by Google without any explicit request to be crawled. Google could have learned of the URL through different ways: for example, another page links to it, or it existed previously and was deleted. Googlebot will probably continue to try this URL for some period of time; there is no way to tell Googlebot to permanently forget a URL, although it will crawl it less and less often. 404 responses are not a problem, if intentional. If your page has moved, use a 301 redirect to the new location. Read here to learn more about how to think about 404 errors on your site.
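
As a sketch, a permanent redirect for a moved page looks like this in Apache .htaccess syntax (paths are hypothetical; other servers have equivalents, such as nginx’s return 301):

Redirect 301 /old-page https://example.com/new-page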

Page removed because of legal complaint: The page was removed from the index because of a legal complaint.

Page with redirect: The URL is a redirect, and therefore was not added to the index.

Queued for crawling: The page is in the crawling queue; check back in a few days to see if it has been crawled.

Soft 404: The page request returns what we think is a soft 404 response. This means that it returns a user-friendly “not found” message without a corresponding 404 response code. We recommend returning a 404 response code for “not found” pages to prevent indexing of the page.
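
The difference is in the HTTP status line, not the visible page content:

HTTP/1.1 404 Not Found   (a real 404)
HTTP/1.1 200 OK          (a soft 404: a “not found” page served with a success code)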

Submitted URL dropped: You submitted this page for indexing, but it was dropped from the index for an unspecified reason.

Submitted URL not selected as canonical: The URL is one of a set of duplicate URLs without an explicitly marked canonical page. You explicitly asked this URL to be indexed, but because it is a duplicate, and Google thinks that another URL is a better candidate for canonical, Google did not index this URL. Instead, we indexed the canonical that we selected. The difference between this status and “Google chose different canonical than user” is that, in this case, you explicitly requested indexing.