From ea4501c6db43aa411ed5a09691192918074b1ecd Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Mon, 28 Oct 2024 19:07:46 +0100 Subject: [PATCH 01/19] Initial import from elastic-package create package --- packages/azure_logs/LICENSE.txt | 202 ++++++++++++++++++ packages/azure_logs/agent/input/input.yml.hbs | 51 +++++ packages/azure_logs/changelog.yml | 6 + packages/azure_logs/docs/README.md | 84 ++++++++ packages/azure_logs/fields/base-fields.yml | 12 ++ packages/azure_logs/img/sample-logo.svg | 1 + packages/azure_logs/img/sample-screenshot.png | Bin 0 -> 18849 bytes packages/azure_logs/manifest.yml | 105 +++++++++ 8 files changed, 461 insertions(+) create mode 100644 packages/azure_logs/LICENSE.txt create mode 100644 packages/azure_logs/agent/input/input.yml.hbs create mode 100644 packages/azure_logs/changelog.yml create mode 100644 packages/azure_logs/docs/README.md create mode 100644 packages/azure_logs/fields/base-fields.yml create mode 100644 packages/azure_logs/img/sample-logo.svg create mode 100644 packages/azure_logs/img/sample-screenshot.png create mode 100644 packages/azure_logs/manifest.yml diff --git a/packages/azure_logs/LICENSE.txt b/packages/azure_logs/LICENSE.txt new file mode 100644 index 000000000000..d64569567334 --- /dev/null +++ b/packages/azure_logs/LICENSE.txt @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. 
You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. 
Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/packages/azure_logs/agent/input/input.yml.hbs b/packages/azure_logs/agent/input/input.yml.hbs new file mode 100644 index 000000000000..40c0dd217007 --- /dev/null +++ b/packages/azure_logs/agent/input/input.yml.hbs @@ -0,0 +1,51 @@ +{{#if connection_string}} +connection_string: {{connection_string}} +{{/if}} +{{#if storage_account_container }} +storage_account_container: {{storage_account_container}} +{{else}} +{{#if eventhub}} +storage_account_container: azure-eventhub-input-{{eventhub}} +{{/if}} +{{/if}} +{{#if eventhub}} +eventhub: {{eventhub}} +{{/if}} +{{#if consumer_group}} +consumer_group: {{consumer_group}} +{{/if}} +{{#if storage_account}} +storage_account: {{storage_account}} +{{/if}} +{{#if storage_account_key}} +storage_account_key: {{storage_account_key}} +{{/if}} +{{#if resource_manager_endpoint}} +resource_manager_endpoint: {{resource_manager_endpoint}} +{{/if}} +data_stream: + dataset: {{data_stream.dataset}} +tags: +{{#if preserve_original_event}} + - preserve_original_event +{{/if}} +{{#if parse_message}} + - parse_message +{{/if}} +{{#each tags as |tag i|}} + - {{tag}} +{{/each}} +{{#contains "forwarded" tags}} +publisher_pipeline.disable_host: true +{{/contains}} +{{#if processors}} +processors: +{{processors}} +{{/if}} +sanitize_options: +{{#if sanitize_newlines}} + - NEW_LINES +{{/if}} +{{#if sanitize_singlequotes}} + - SINGLE_QUOTES +{{/if}} \ No 
newline at end of file diff --git a/packages/azure_logs/changelog.yml b/packages/azure_logs/changelog.yml new file mode 100644 index 000000000000..bde20454c815 --- /dev/null +++ b/packages/azure_logs/changelog.yml @@ -0,0 +1,6 @@ +# newer versions go on top +- version: "0.1.0+build0002" + changes: + - description: Initial draft of the package + type: enhancement + link: https://github.com/elastic/integrations/pull/1 # FIXME Replace with the real PR link diff --git a/packages/azure_logs/docs/README.md b/packages/azure_logs/docs/README.md new file mode 100644 index 000000000000..badcc8402399 --- /dev/null +++ b/packages/azure_logs/docs/README.md @@ -0,0 +1,84 @@ + + + +# Custom Azure Logs Input + + + +## Data streams + + + + + + + + + + + +## Requirements + +You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. +You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware. + + + +## Setup + + + +For step-by-step instructions on how to set up an integration, see the +[Getting started](https://www.elastic.co/guide/en/welcome-to-elastic/current/getting-started-observability.html) guide. + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/packages/azure_logs/fields/base-fields.yml b/packages/azure_logs/fields/base-fields.yml new file mode 100644 index 000000000000..7c798f4534ca --- /dev/null +++ b/packages/azure_logs/fields/base-fields.yml @@ -0,0 +1,12 @@ +- name: data_stream.type + type: constant_keyword + description: Data stream type. +- name: data_stream.dataset + type: constant_keyword + description: Data stream dataset. +- name: data_stream.namespace + type: constant_keyword + description: Data stream namespace. +- name: '@timestamp' + type: date + description: Event timestamp. 
diff --git a/packages/azure_logs/img/sample-logo.svg b/packages/azure_logs/img/sample-logo.svg new file mode 100644 index 000000000000..6268dd88f3b3 --- /dev/null +++ b/packages/azure_logs/img/sample-logo.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/packages/azure_logs/img/sample-screenshot.png b/packages/azure_logs/img/sample-screenshot.png new file mode 100644 index 0000000000000000000000000000000000000000..d7a56a3ecc078c38636698cefba33f86291dd178 GIT binary patch literal 18849 zcmeEu^S~#!E#4Tq;}?6chqwB{?k=6jc5D4>l%v(rleJ2Y%tW zDj9g7px}|*e;{M?LDwiK3@FNS(lDRTd-MJYIyUJCN948~OJk1M(DrJyI#iV;P4k~& zFZo35IfQt0RwlUN`48^6(1dv_wm(y1xhEdMld=Y?!%u=fPT_*{3( zwBwz3#qR}_)t>C*jp5@U)Ti~B)Y;qq*TRxZJ7ZRN_^A3TDAEM*@7Ve%(Ro7=1%1B< zVj6GBUTxXev>_^SFA zgKZ=g4aTS}9>Ofj7cSB0WO?gQ)x=+!hs_)b$6#>ScFZ>XAoIX)%Bc|BDC~JFBk0f0 z0NY}6gb)&!qx^FWC(!ji+Kl$V$2|ocA=vN0TM0Y`U?tX+T)c*C zA!IL(T2Vm%MCLa85^if@J@Kkprx8QN5!6eCR@4Oa5S?4-4|ou?90mFCM8D!;n(5xz zO}-*t!TntN>|a$s(kGQg1P-U?hqvGF2_fGvd&~yZ_l3Qf&j~XWa=;>N3#-~#zjzcc z*m18L`A-K2o!d@J>a8SRbm4P&-q1(H>|JgIymDbnJF&@008`=X!P?4DGgZb>voUl^ zNJKgPR4S={)3vuk_{n@=M8q;;aJL>q+VLdTnO=}`&x;1DKjJA3*f*idS{jP5?+;!W zn-^7021Z4zv`Aq`hmX1aid997RNh3fa-@PG(W7TzKa1W&5^y3|lPeETP7j9qXpo4)7%(W0_2 z^Nmq;t@rb1eP3?%kOkH`P%!zTC7ZHjSfNN3*Sb#=3#jB*KpNGNfnRZ{N(6DrW(;B2Bwom<%m?VQP%K+ zsFeF1-(DY}oP@)w^Kw~gPg03q?N;)Ec6^|nikA34T~RynX*z}H>R~qgT$`Zbhn8wzZs$j2fsGN&rOK-mIBBvzD@a8FgbLpL!h5N^u&0wG} zq!#md3MHITv?3@$37J?lc_5*LWJTTjel;IiU-Yq;(g9I^D&KN_NKVS0O~GvB~FzPM6}=4d%fG4Nw4pZshcyLqK@`b8?RhD38haIyr@+8+0r5TC1*C7^WleJ zZN3_ngTD#RQvNL*;qD2H@cBWJbCC#d!}=oKfod5SE9a?!?j%DVt1z@inN}Iy$r+96 zM@P?AC+(`cM;z6J94BYGJ;+P-N#yj$?`G26ydS&OVH?~JY(N4l()Fh+x+DoJ@r<+i zhm^ck@QP`=fLApr62@KyOef~}zuG;(VbDQmw|Wb+oSHSw=%w9R)=et0cY*~ytX)#M zEXlK^p;zM@vTnXn+C1vwP)~TJv|TvDE2($;;EzC5_5IL#H;u z)#CO8)TSzbt8)wHB8$I8KcIojx&GoE)3QNu{CQ+_xBmQ&`mL5-u=BX(hs^hMY^ zae!!*Q;Tr$@(0~GoBJAohGw*d{l8~!aXop87aaSUb2jm)Tk>#$1*cdo5Sl+?oD!l4Og~yX+soottl4 zp4OartUuAN(dD~yLJ}`A1*!D4-|L^hM;`_DM^1KYs-VF(}h(BjRO``b+xV~%O=-)?p 
z7ciJH7Fnl?V&=ay_AB{oQoa2iR;6$^tiE|-eRCFy|3F@%j#6gUxkZX@?K`F$u#;T< z4IZORpUthmB?U`;zrOkp?P(Rvd5TFRWrBJmVg;KEZvJ+;Q}FRY%QZ?c^&$oPXW+C5 zdN#c>v%U?QuE+hMQdzxS1Q(BT90;29qu#^A?a^)Ui;{TJ;%`nLgm2ew$J4NvREjCJ z$`C7&?tH$CrVG@M3J1-KJw_*9BKeL*JX{ zN+Vg_TXb9^jJO$ZGkXO6BBFDjt~w5`w2TB*z$&1W5Il3IiDs=ZMDt|9iRtKET*wF6 z0Z+|N87p-5Fh)^(*l>OVr5^aY5LW(@PuM>Qo@&)yj6XRkPm1>eTF#Y_c*aRF^ZY5A z9FAU7lKEHG@i{wJMPg;n6z2|69d-)q9@<7t()d-zPy&X zdXG7{Uw{k23)CzzQAXw#iqj<1u~W@K_Ljc#?ukh;fRKHeJ2l~Z+52b2n^bGiDF2oX zm25FLx|4AP8>rAi@koY03lrtS#X?zK591c?2iZ_jjc>0y>q9>fU<08o6zG%z9WK+S zDwZMW4~28wu#ye#V*@#5t^S@NiAA`3{SF$xINmc_WW^u-C9M=H>RQ1>WM=|R!660{ z6E6%DwX`eu<3pkmz7Z=FCRd$(vhDkc3yMnSr)5C*aho)DZ<12$`$TXj<8Z70)|rK7 zXFD8QzksfWZU`qL2K8X{C~TcF{KVW`3Y{IMb&)T9%1V`tv(HY1 z+LXkLyM|3mtLD{x-#hOw-U?sr-iLeHFA|=-sGZ4#hX)atL!a91(tWJc+og&5W}VfZ zpgE7`{5D`~?yGR++y7~xA&eU0N*ZezDjF$> zUeK&1aTFQRg*?v^Z2e7u<`lk$czR6}b6Cl-qA9%A`#A6q0*zyTu)X`3rhjR86NK3= zLdw{+-F}+b2gxd-qF7>Rla}dFkj|L#c|pg5Ni+MRA|BZH(@ME*o<1ijKcoXb%PVfJ ztp_uf=G%kvU((pHcw90Xut=}atA!giM-5By)f40nKp zv7Wdb{;^<}VRvruH~rYr~wEuYY2ov-5Q|p@u3Da9+z7PeIpBAwi?RxnxN3Kt+N9L(LUS%wxY` z>e&1VV;{CYw8DNRlvBH)>!I49SU4R!t3I4=y;mCevPZh!-}~G+F>6hcL_Rli4r zC4(WN)`j$>^S=~GMGR=^)A6wrqi(-x{xK37&Vx!OS6t=KQ2JVZo#GrSODtTe=TVh%*qfF%91nqsMNLNL^Gp|_ zz%I*HUkMQGqb!1eh{{bp|0GSCDbkG_D_d)8<(0r<6-%Qi7qDa7xZjcdZ$?Rth9L!f z$erCcs3<~mtupywbaT8NWZF#v?iZkvqSz3@p`RiXs7P!GUa~-U9hEG(NgI#3BzO-# z!9JWf(;r!*A=@g$f}>wi|6Q@9z8AmYf~x8G%sp>C5cfuJY;hs1o3Ozu^{pH0AFbs%yU)Xy5>Cf?qXiHn*-PAfKDRiy`U0sFSKFsgEZ6_ z9#ma!<#Izr^}_z*>PRSt564u6We*XmZUx^jv*dK; z4zyFZ*ZFSE!00<6!|+#33&R)@RA8V9YRjp$HS9?CGq*xDSDRbX#i;}mateEF{fqTI zt?X}Efkq_Ap*_ETgaikOBbQ|;47}hwX44K`(DUI@C)QiG&6UJ1UmRn*Q@6%e`+x(gpQp74O{;yli8YLCV}qD z4gIyZd_(8ED~WWaeXOb0^r=9=AiDT}by~+$KVF~M{ywbQl zng-h?a_E;yX?DCr4|_h7JMc7>xgWf7Ek-VmH^hCYunVp3{(d{---&%-GZ=rK#V5Jo zJvP8b!2AA5?9)G8gwzB6ze3TU<5*Pqms^Q-?C9-CN~4hb-`U0D@kAkTWn23``cao^ z8IWAp8h7`%ZA+eI?w$sJktq5m>e&0@mQn>2BdpKAxbj1$m$8Z;`!iFvl9($Lb9Ff? 
zT^6cTZ~HgIeR6R*;G(rzpgsJP41Fx9Df;G6{;k6T(i}&8hX(jHSC@~#X@70h#)g(( z*9vUC+a*b%oAdf1$}Z3NR;|c5nY4^Z51pfqk(tmJbB;Q#ka#tf5eae;-kq$I{xO3<(TI$0lSe-JQzJ*es;il=Kn_?&?E zfLbs{qErPqm)-*ZfwbA*D-shgb|1;X;cH*yA|q8gS=HiosF=-kbdk6--SR+`F^H_` z0*i`J==@XSe=HT;_``G}ulE=H@*3GU*?gVd@h*`eT^GKjI;C@8+h~;(u3bA#b&bN{ zYw>dJ$(;RfHDLlndS`CWOE=g0jOocCc&;w(dOzrLf4-DK*MD@P_;u&CbfMw=#Q-B` zDq8hGwKN-O7(hQA_bP3f5XrZH+@*FGw~ppmDgNWcf|Lf*Pc%e5dw1DcJ1BWm!z7z3 zr^toEU*P(>G#;_1X}Rz(5lbDtCui%hY^d3lm)kw0vyk zX~K4$AG#7cG`6s2%9g9zsaQ9o?;3yzW4Pt!;NlS zzI#G7tiq&@eV&}qDtY(e$1JwscAfle%Al{3>Nr%``n?`Jac^CdOXUbFgI3;m{RkA~ zokl+lxuw9=%W&MmzA+G%ZdFMMP&N2^6BWjG2Lt|xKx)lMCR@b0n+xgw<)&Dwi?}>- z+$_e|@M;uW@3z6)q&L7bYitZ%huzGqH_qHOr&G5o!?(8TJv_MN1ka|&c6_!Q>#PgHSFoPWiLg|k_{ zQd#Zy&BPkU(0OE5S35!B5qb6%T3Wd#J(zBl8dw6I#xIDDF-LBPi-jXv1E?!gE|1OIdTejK)+U3ooC^otSIRsWZf-`&K}6}s!407Y58zH zK(oYx*7sN1O|Z_1YIJS_H$E@DH(hB4QKNCGQT3PTvwYoe2&8WKi5`5tU-r4!>_V3XUT}N)>8V;+z-!@-IGCKiD>E9RC(K`NMx=;Qp zf$2g^t?)zpU0L!BZi(oE#)^Z_biT*Svh>r#%1=O+Wo37G`Q)4@k#Pe?^mgBIugC)8 zyEICH=`{A~^x#X&%tr-$j|(nXrIrGQYNY+C3M+LO;yUU4-|v>a5#P)XYp>_|C0f0n{_p0mvwWmghfd%!Cm}$qBDxOqA3htLs~ghSA1>6^dVgd~ zVHHBBy6;Pp=El;dkTE=ttp~BoOJ$L@EB3Z37T1kTNG3tm4PY5O-7hP5DA$-k=vV&6 z?RiAm;W~*o)R7!x9>u$&@|&D4xMmJ*y+^-6t!F0u8G~78t&Bs#W>w_NbW>W9M3tXWXRf zI86FWVx%iXXh6MJ>dg#?lNu{K@S#nzMIG4PXQd%!Bvc*H0c7F_Y=adptJr*cHevMQ z%?Xu~q8CFw>^L*S_83kVhq=)hf0%_Lq}SE*g(Da_A{kXVZfAd*YCwp~bG32wi&SNM z#QZ7}Ug5-=+s^uqAh_|}gzya<(&E?XAZ%0ybd9nraj?|z1YfPr*{N?Q{ji}YG`T#| z=uwJZHIMlsmevnenT#-)t$L*=2wh|1EYXW?_36TR?L!sUItJVxaC0$Gb|gq4{|4gA z(v0ODFj!T)jc5>65ys)* z7$aBHfbKdz@QJq1b`NT`344*g()$>5*Ey`TPB7WI;|_8o8t9-_4ikFub|I{66>ge> zHA+6onzFKY*eaiA!77SD*^&LyumAR6gSvxY6Q?;!AvI{rZ##!G$%ZfIgce4F`aF;e z?jVh%+B-vj69ei~bh_zA9w}S4B4rzRKQ1~u$gwVu_x5PlRKDXX2(_2Mm7fs%6{SS7Qh1gWT8xaxc=f8`mW38ukIZxwU;lmHABwFSg50*o zrj%f%j~IKR?N5Dxwrq|sTa?!pd{b3sFM&~{4~_^YH4$bI^Fq2W4-y`))^|7fS?i0) zJ&Z9wY!8%l7@gAr`2{fqA;L;ptQR*X2|xUtrT47KK%XN+dydN$*M?65LuXTRabgERR{n>;E;(&vS0_@COY!p<%5LsRqGpER%~YjkSK 
zwBo9-2|-ZFiU3TT&S+@}3gDT35t0IXTzX@yHA(v>Y8;-mZNySQ&fE7RJ1^tzJfvdApX& z*!+tE)Y{oR%jk8A)3EiI3i*(TOwP!;B3hAOj?KQ6^h-q~1V^166uYS~mH*2Hh*0}r z`R3u1#^LG9IW|^QT^|61H(T1Jz?n;(Z>52lU0BO>Q6*zgpP*gTFk2Uw)!3zt>3F~_ ztil4!R*-j}wjh%&(kSB%}X=u4RbFRp@^l+$SmM@nW9B;yGbf@nasjFMEE{m9Oe

}qal5$moSACwfNXLXG5|3R0AtBcN` z?%yS)&>O>sqxU64U~C3&Q^>z-Zt}WuX4Wh3dKj9EO zfSbV!c3e;EOeKHQmWEw#NM4;*tw-2o@x&kKT?rsmy-F|$jw-F>WgA7?C@{O1qPg*J zf92|RTBMh&ptHADFc{T+cB?+mOj>h2HKgwkxq6w&XBxPc?>=JKvU2K9aU93@vp-R% z{5T=P$9U}AYZ5QU{3%7}YZ+ACWXw#-U zWyxU(OP#Q9-2AeGmCwcp`zWghf2hvsOjWjDQbU?U`v0&a--f1`v0Bd8HLiLmo)PKz5!A1|XVO+89 zm3h2~6yI~cpWor!_yt-?Lt>z`c0a7cJAW)#d8N8nNIf0H<+v;s4{0guDD(?T7Z<~$ zd`$vpZ_QQgFaMT0_d5&+(jwGU?M1FqUu6wjA-9z?mRM}(CmSdK;2e$Na}F-8jbhgN z9)@AIQeghf{xCC^{9P%VdYW1PP#}2BJwWt z0Hd8%st1NK5%h+)UB^mVwh{e#8TIm$xxgGo6I5;e{~VUeeMGRpM_Z%=eH5$X1}?Z5 z`|*_Vp~K&ziz45-Ih9y>EOr(Buy0&n$dbQ4$5eSr=Ti z#~7^n8dmem;$0D4+6eV7&G2D~d@ z+R#u8+nw_N%7_U_1e53P?~&10^m|ZUXrZhVp04lQLsGos%0fRDhS=@>8TOAAxK;Cy z9GZw_1pfSxD5~xoR!INI?tU0wrKDd6^Tv{jL>`Xb49kBaNPlhMaIfh_nq_)zB7NcX z05XeQKz`@BDUx7*i!V~%dc8XQ#ngBw0A2tSr(npSCrNy5Z7>48v&Zz?0{%FRElh_h zN2|?#EhJL5HQMIu6m1=ypTR?tVymHK)xQvS9ir7FzMp?CjlND39PK`od#GytVhZWp zQ1@>MTE1*Ip>hnXSWa?XbMH#708@j12yPbm`JfcqIgmJepn$5YgkJn_%5I)mr`Q(k z-a0yFR3A`houhvf&|wNpIsV{2p%MqhR@`@R(l6`}iufEgI*UxWq~26?WTpZCV{JtG zYL?&#I98fyf_;2S0?_V{=Aa4t^x%vy$pF$_Lh7W2f*~5uPvGYh;vZhMv|u+Z?2t0~ zcYPXdxbg6OS*LUjR_=jLDt)ab6;?g1IuySLG@UE;jLpt-wjLX&RlY>fnd@f&?0NyT zht5vhP^};k6`U76$%&I)iWPNxG6KPjdh`S6>g9GN@;KObQsLG zKyjfrPR0PU1B0a0=)3@9eCDl?mB9rFdlTMtTAeZv2}F*|@JWleq2+H1bt>>x!^wTk z+I)cgsZwzCMwoRpW_*!3IySTQu!`HWugAXe(Ai(a9Rsu;*0#o6torxwNMxPzEAjt` z>70Vw;HCQ?AnP`RKQ;2R8h%;LI#tx^(MO*lMWJe4_?)Q571P`kTmN#(ez21V!<6+S z@Uap+y%#8&cGgdf+E@y$dUx3g#)=#5k31Vqv0p!%L`*=-PiQAiSg-d9lKRZQDuJ-| zA96zwwomG+4}X$vR*IU=NC!vL<`rUTbf_uRJC4FS;k&HtV<=<)p(qymH)=MDV^aqK z#%sid7K|~!H`J!7hRr~Z!emxgWq6#GpQs%c#BM+scvNGz|Gi4G`;8Z~dP8)+51iB8 zw)0fazNz5(iK$LJeC_4e^8&@wT(DZ~~>SStz3P(>V8CLNlZqgv=2K-|Lu~si@XFwMN>QE^k zVS2U_A?Q$?M`NkU}^!M8m%O&T=kW>dG}1s2I~hxp9Y=a=1XX-(fB5) zej3`e5Et~R^r%?CZK0)UZsF_+tSOGIBMdrtMf#oJjGF9U`*P8t>i*TWed$Z2WNUZ* z_1Qw4Yr+Q0@bD?hD0P-^v}?FpPBg~zz5~g@J#J76C695|P>1l;OS8%~hZh5&-9Ji# 
z50%&56ZK4FC9}{jHL0!=qo9Yd(GGHCEX2|-F(f}q6@NMT4P3rQd{Q!=bz-8N(Z^!N;;ZzAWRf@C?X>mG=_NgyQX_?Jv$m(9$W>P;+e}O|&w&DjbsJPdWp0A2$yLr*!BY73Z z5d*BCaTI)w=sTlofc>n}@v_tSXIK?8(g`G_06u>SD*fOZJ~visq3lBVS2+cf-r$UQ zZ(8A0g&5M$IV7w5nqL(m$VS0X?=yy-e6>S>Ca3wZNT)b{GF39_gJdONflqc-j$b~o z2l@@h{$KVfC)V?#We*)@xYC;L^<@cHo>8axRMbSzw|eYTl|8pkabsQJ(3`z{>5H}c z`psz_Y6t)hvzL^=}P#++XUl6v`-j)SuXd6BynjNZ!&c2hnyE&4*K$nXn31Zk)cm+lx;> zya{T?{MRtSu?^3Y9bS&O$*mW^vRUpv!J3Tz12?3&Y62b_oiZ$24O(75Z)JWb+Rj)ACbK`f<&tSwtT$|Sy z$41kRPiM-jnPY9PKrLyI`pHm6LusMsrO*HpmE){Kp1^u2t%6nW^;GB|!4k!Ik8oav zjM?DBKh9G@W0gEwiU-M}0B)}olvoM71RccgiZBCs)L?q_GX&JDhegx4k2&cNatr5w zU)1#2USb8&`etO5Vk z?0}K+*2*@a5yt*X{qg0@8jEz~jcylVj>-042p1PBnabI#xUiCRD!ouw3?u-wwsqwF z8(@m8-Lk7q@v154g6yvx_tRDa>}oqpVda)wfI9(;ZVGt1v^{<|X?vC_(i@IJC+2I_lusrT=$h zF1lPc*Neb`;Xgrdf`p$w)~MzQW0M3_FYRKu{2$VU82J^B=X1#^<&P$_`=S$Ey04WU zTxG;hrFNLhWC*p+sH3x=JVcBJ9*7>eO20)n671SxQhZQlHMRP8FyO}yai~OTsbms0 zQ3b$C1Cn!>jMHDq{VX1ab^~_Q!z+f75+_AuwiN0*wA_#M#0|rU{+NlB%>Y+TNT0Gj z`3^LKMSJjz2(?lwg~ixDl_5%rzzZ}o_6Fj9e)T7gpH4=BgT1zmwJpC@g(f%&0`}8B z%7Y&qlP3aFmI#nmT`|R3+Lwzp+PLXt|5g%vlY_$fvse7zjus0D0fA##r+i4G4K-2Y zC#H95NGoYfWP#ZF_v$^Li{PZpm}fc&)aL?5doPcb835Cr6`T+EzzcEvLtmXcbAb<^ zw!_Zgk6Az7YA@*vb)(G{_W-B|zrf76z^`X%jOgqIIaqi~5nUup3vugzzg&rA^w(zR z+qCzvIV~nGR=47pDOcNTzuBw#5a=<=DMvGa)g zPw$^pmq9Fg&b#BZrPSoml(149rZS!fioV*Dy$z440U3MXDJmI?RZqLy0}IKSxN)o( z8+8wIZs#q(|KTg6y;Z(=96>xfpUsr@SP}I^v zN^R;ZVrDaWmNrM5-<X@k6JyjvA3;jHhma|Y|7!Vk& zgf(UK_6~cC;!|b!YTjke=nBiUqQdb#I9TY}!s5P)H+^c;9cW(QO8O%n5J^8Xfktd*qrn)+?-gP`m%B&q zi^}7jKm`yMW8ITFOMN#!QIB6$SWx*75tnCMaNg*_J*WuwBh~AT>0($nS8%&zmFQDp z$dL65niDtTV%!Kg1`6epWoQGNG`$`doy;Zjaa`keyL0F6iJMae6FIgnhAfzU%m@V+ zm5rQihLwS~b6{-bVR1ZSzBI7(Yj+V6T-8V*7I`ptWArGdy~8pnV>fALpi~NQLZ7;^ zpaj35=md<~-(tNmF69UX3?ua}A7UIn)q5i1iPYEGlhYSbkfeX`5epkxtzk3Qbu| zlgA`7ts%IvF4HJ}-98akyRnjCo{u-`A4&b+r?s|o`4wdYAHs-yh91p$7C_|+EdYH5 z10`!*=n+W9g>V&dfU1H!J}ASZi&-?`2IlDOAHnu306rD`y>jT)4^@S(X4XhN2{g9i zj-ym98+RT|d0ejIFJCM5>S{mT-8uGmRRqkJ3sMO_AQDrv77Q 
zv$t>zaVpVF6eBguE%9M2u?E-Oleft8z5+~W`G}KXD(Yc;7m4{Op>Le(k`g1UK7(1# zt6g}$n=Tdn{T4pu>v!c;xRCd_WI$Ali13x=U_0T!Ga-U~9W88q-lU+RLn2`N8Ouho z^0@SvC>$DguHWx)?^*ms-{PVq%dn(U3vrLj9zITDqQZ`H>Wsp@Gf%}SG=m)Vh}F$ztQAbwVGdDgd!28j&yX9wLW&s! zNR~6`nYg;ULAq8zi<;gUchAV5ib67Y##l2 zy+%gaD(|~G4@||{A;TYDSoS>q2o{t23t-^!NDSDEm8j3ao7Ei>KYLEpb$jz}7ciAM zD}trDN+AVVT_lXW<++~>8>Cj8fzJo@R;>%nGq)6+w?(#mNc#1J4W+!hA}?g$0Xqo? zn67qJmss)e%k(xO*&K@z6+}nHA(lCkb6n-|{pSztys$8HiOWTVR)tCO*Q9~if%3n7`uxGzE+OCu zwcVV|tgQdq60952$>85-GHk$lwM(uI+CU1?i{sVnKd0+UNq#eSSKjUKfDDgLnBG1y z^v?f#MRFkph~TgkoKBvM`L_~we8__xpLcjh`GwV|87q`vazJq?SX=mXhdvK>VqUf~ z4sYoTIpt5S)KrE-?>&=cRoBumD7;b5pq!Y07)#I$`)<@U+mo*dE*P~773p*u^6waO z2#thJahX_ySlYMpjx%h<)i43ao~Is`^Ya zMNZkuChEA7+ZJe6$>-C*dzTYf3#1SY82yFG?S&Q)5rTbKS-XLjckTLEc7>^sFcntQ zBeNXCSg&q1N3Bi^4zlQ%mcEBQ%2ab$?(;t-$HYd2%cnX$uuwU#I_6D3($m zR(>gHzM9ODf;r8b0l5LuEIQVZiQ0-|3Y_xzJkZc*CD=bPJ+&J+>>se%D4uTq?Ny{l z0Z5~og*Wa1O&anlcRWu_%o)(x?IZ0CfUNk_R-ik>GyvdFmpu1wHZaKTDGhL zqxsji)n<+)VKbV0_BRq9E;Kb`f=&vn(BK0Ba-gL?ZN;^^b3YFg6R=!q#zM;tcX0dM zdy5PPx@6pJPXHzH7$dGjM|6@6777nXPWV;CIQdNf(*Znv)sMy&Xcq> zhCq+6h6&v8<0}vd2(sKqU3j>fr7&#Xy%qZHcMU3m{wld^Nstkz8GagB?Y=SI&H z&{&BSA-|(i35$9(l6LpFyLm$0M0fK`Dz!~ezL?yEInsXAFR!bHe;ZL>Gd(#Hv?<$%`^b)oi?x%(jkylCPb=juPlF znMo&o961=NZ_$gd{xp1ZY2dNDOS!=XVj!M^A z+$z`EK4v=m{Bs{&I4W)({`&<5*^BV#z{IBAI_d+9Qx;~ zby?2zEjzUUeZWBDo5cz>%;z||z)<+6UtC)y60yD5J5`oo_zSM;l21@CY<0_|)NME5 zs)kHCMBa5YzB#N=W2aR?y9((~WuYwwf+HAc2mvU>NYlxOTvGf^Ye3za?*f-qUs^`a zT3>RPh9*Jf%3*bf|kqtnD_Buxv!<9N>BbuD#uYv-q^ z%RDnd7a3O4M9Y~TNISS@9K}JDkdg@>x8E6@n8jF=6qiDV+}{!V)(o?ykcr0sxBGEx zo!X;pc=r{H^vw6ztV5VZXBa4~(ujB$rZQ|AaGN@J7#q%2nU9gJ)g6dcj}zYB1& z@iFE0vMQVxa|v7tDHS$gwX$Ihc#M^DXRC>J@Zk?dC(3uB_s~*W&m-01DFMQGWjj5x z5po1@1gPl!v1Yra@qPG{D;$bYLM3qOwpl~7f~l)#n< zP+6`!NYe3EE~4RFR#_e=7YctPRBt6$He@`%e5m}f$M%yzC2S0<1}hRPjO>HJY~ z*dx(nbMbjv*;o&k{qzBdF|lS;UNVKziV=gbLq}UOCwr8GT5E9oRYQ}+>DhbQ1R=lj zgcNJN8|D)$Mx3#c+t@lhqcDUnHGVt0&EyQ{b5)=52B(VTzw=pQ^ba3`JB@BU^lS`_ 
zJEiLzgU#Acd_!}FMxCWC**FP^i#P}bYzNs78)#uSejEtYLbG>JJ7Igtho2oKQ;XW~ z4eMGO+t!_;G^V6c&R`5Tg+Pz2ToN(aybq4Q0ssie_{`t*DO%V7FaZ`{MBobFc9|pV z70o5ayHGJo9$$&Pgbs)pWNzduAcbh?~U?_P)(ve0S*3H%eNF&a5XR=!J#4c z;t992n7ZJr{*%`^dU1d-ALE8!3i#v;3r4r%j+JFCe=%3Vj=8{aXe zs)jrcUBZ=;LudcTUXj2ub>K5!{HHFHJ}Trx(PYugbQ8yK7&sqX;(;|UWjk3tGs3zuceeX)i4i_jA8Qz2Bc%DxN8 zXw!$+9jBtEHd1y90bYG4f8DcJM)Ab!M39tH5zz94*MAvnhA377@buNupSOUU3j8~> zd6&hk^ENRCp9T?_QUHk<=(&9Q^MJ^pi;nKOYNR@?L=RCSmKMJ5UQJQ`X!i~(gD*P! zs`RobzJG3Ra_Pg+WZUXUmMU$ilpwfcEti6)mw(~MZ0q!^sza>#jv!-+7B6F3QuMWg zVO!rXwD+lF1BBTito?ml-CV3vxuek~TKuOX^N6sol$v*{_%nAuD7i81eXm^Lz(Z~I z2Xj_Dts#G0&C;PV_Wkq*1QvB7+Post4={v;gk7b9u%#DC_bh(iJm$rqog^{JEx6NE zrs5^2SEL$|98#2WV#iG@L6cq|)SuTMSfGocPl65wUd^|5Lbpnb(;t>-Qu2jvANLgv zdte0vED-3C@^BdyHWLL(7{G$WA02z@JG!T-U^Q7HZ(7Bs&vchkh(p&}KvnS{MG^i6 z4r){gJp9p7WyWOEiKA2Cm6EXIn&&gk|Fc6^78OpPrX4ExCFE=SD$xcH;C2eB^{XTI zaxz_Cef*Yj==w_i_BTGXP;8C&f? z*QEM>={jFM8)lWAR870pG4XEWsl%%K|82S5b=9hVz7p_6i-d(Iyvq76&a#PV zR;VbQV|n?mg}&(ehClg%tK%IjgtnTR-u)lxH06XxXqH0soAZbB_Rm)XX=6Nge1uoG7 z9vQM_S~2h53n|W`y{{R9+=08rv~MohI_v4-BU^7fZ0-A}#b5{AOSTJm+(J;9yw%pD zX6u62GJ&@HKX5zQwq~j8T!Hrv-Mk^QSB5cu09L03{ToDO7jikM0WAcsjW>D}^jqCF zT0DEZ@K^KO_MD*%M!+V)lGVU6?LpX)eQVXEmq}R`NIJv;kBitJ!nW?0OxTVlu2ADf zE{A!*0g3%nwVcBD+AgT5bGx@WOnQk{zRpiZ4HhP`3BF%N|HdqPbbiV5)7x)kzC3ID zZ;27>0^mrMgWc7evsbQY`l`l})wr+e;=8U_!2&B77;1qL!N8y)eTJ2lf#CvhR~!Qa mc;sM|90DP5A*JW%f2r=u1xt!e4gwD_V(@hJb6Mw<&;$SznOm^{ literal 0 HcmV?d00001 diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml new file mode 100644 index 000000000000..2b8b21ff8dc7 --- /dev/null +++ b/packages/azure_logs/manifest.yml @@ -0,0 +1,105 @@ +format_version: 3.3.0 +name: azure_logs +title: "Custom Azure Logs Input" +version: 0.1.0+build0002 +source: + license: "Apache-2.0" +description: "Collect log events from Azure Event Hubs with Elastic Agent" +type: input +categories: + - azure + - custom +conditions: + kibana: + version: "^8.13.0" + elastic: + subscription: "basic" +screenshots: + 
- src: /img/sample-screenshot.png + title: Sample screenshot + size: 600x600 + type: image/png +icons: + - src: /img/sample-logo.svg + title: Sample logo + size: 32x32 + type: image/svg+xml +policy_templates: + - name: azure-logs + type: logs + title: Azure Logs + description: Collect Azure logs from Event Hub + input: azure-eventhub + template_path: input.yml.hbs + vars: + - name: eventhub + type: text + title: Event Hub Name + multi: false + required: true + show_user: true + description: >- + The event hub name that contains the logs to ingest. + Do not use the event hub namespace here. Elastic + recommends using one event hub for each integration. + Visit [Create an event hub](https://docs.elastic.co/integrations/azure#create-an-event-hub) + to learn more. Use event hub names up to 30 characters long + to avoid compatibility issues. + - name: consumer_group + type: text + title: Consumer Group + multi: false + required: true + show_user: true + default: $Default + - name: connection_string + type: password + secret: true + title: Connection String + multi: false + required: true + show_user: true + description: >- + The connection string required to communicate with Event Hubs. + See [Get an Event Hubs connection string](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-get-connection-string) + to learn more. + - name: storage_account + type: text + title: Storage Account + multi: false + required: true + show_user: true + description: >- + The name of the storage account where the consumer group's state/offsets + will be stored and updated. + - name: storage_account_key + type: password + secret: true + title: Storage Account Key + multi: false + required: true + show_user: true + description: >- + The storage account key. This key will be used to authorize access to + data in your storage account. + - name: data_stream.dataset + type: text + title: Dataset name + description: >- + Dataset to write data to.
Changing the dataset will send the data to a different index. + You can't use `-` in the name of a dataset, and only characters that are + valid in [Elasticsearch index names](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html) are allowed. + default: azure_logs.generic + required: true + show_user: true + - name: resource_manager_endpoint + type: text + title: Resource Manager Endpoint + description: >- + The Azure Resource Manager endpoint to use for authentication. + multi: false + required: false + show_user: false +owner: + github: elastic/obs-ds-hosted-services + type: elastic From f18832193c573809d3e91bf1ecb6352ce5a7383a Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Mon, 28 Oct 2024 19:25:53 +0100 Subject: [PATCH 02/19] Add azure_logs to CODEOWNERS --- .github/CODEOWNERS | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 9ad7aa51598c..d2e133e99698 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -99,6 +99,7 @@ /packages/azure_functions @elastic/obs-infraobs-integrations /packages/azure_functions/data_stream/functionapplogs @elastic/obs-infraobs-integrations /packages/azure_functions/data_stream/metrics @elastic/obs-infraobs-integrations +/packages/azure_logs @elastic/security-service-integrations /packages/azure_metrics @elastic/obs-ds-hosted-services /packages/azure_metrics/data_stream/compute_vm @elastic/obs-ds-hosted-services /packages/azure_metrics/data_stream/compute_vm_scaleset @elastic/obs-ds-hosted-services From a892a79c25037dfefba0972ab3d8017a24716882 Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Mon, 28 Oct 2024 19:48:04 +0100 Subject: [PATCH 03/19] Fix owner Death by 1000 copy & paste --- .github/CODEOWNERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index d2e133e99698..c1dd7d7b945a 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -99,7 +99,7 @@ /packages/azure_functions
@elastic/obs-infraobs-integrations /packages/azure_functions/data_stream/functionapplogs @elastic/obs-infraobs-integrations /packages/azure_functions/data_stream/metrics @elastic/obs-infraobs-integrations -/packages/azure_logs @elastic/security-service-integrations +/packages/azure_logs @elastic/obs-ds-hosted-services /packages/azure_metrics @elastic/obs-ds-hosted-services /packages/azure_metrics/data_stream/compute_vm @elastic/obs-ds-hosted-services /packages/azure_metrics/data_stream/compute_vm_scaleset @elastic/obs-ds-hosted-services From 5718f2cd28bfb0a1ac8ec6f86c8d2c148dab22ca Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Mon, 28 Oct 2024 19:54:30 +0100 Subject: [PATCH 04/19] Switch license to Elastic-2.0 It seems all other integrations use Elastic-2.0 --- packages/azure_logs/LICENSE.txt | 293 ++++++++++--------------------- packages/azure_logs/manifest.yml | 2 +- 2 files changed, 93 insertions(+), 202 deletions(-) diff --git a/packages/azure_logs/LICENSE.txt b/packages/azure_logs/LICENSE.txt index d64569567334..809108b857ff 100644 --- a/packages/azure_logs/LICENSE.txt +++ b/packages/azure_logs/LICENSE.txt @@ -1,202 +1,93 @@ +Elastic License 2.0 - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. 
For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. 
If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. 
You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. 
Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. 
- - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. +URL: https://www.elastic.co/licensing/elastic-license + +## Acceptance + +By using the software, you agree to all of the terms and conditions below. + +## Copyright License + +The licensor grants you a non-exclusive, royalty-free, worldwide, +non-sublicensable, non-transferable license to use, copy, distribute, make +available, and prepare derivative works of the software, in each case subject to +the limitations and conditions below. + +## Limitations + +You may not provide the software to third parties as a hosted or managed +service, where the service provides users with access to any substantial set of +the features or functionality of the software. + +You may not move, change, disable, or circumvent the license key functionality +in the software, and you may not remove or obscure any functionality in the +software that is protected by the license key. + +You may not alter, remove, or obscure any licensing, copyright, or other notices +of the licensor in the software. Any use of the licensor’s trademarks is subject +to applicable law. + +## Patents + +The licensor grants you a license, under any patent claims the licensor can +license, or becomes able to license, to make, have made, use, sell, offer for +sale, import and have imported the software, in each case subject to the +limitations and conditions in this license. 
This license does not cover any +patent claims that you cause to be infringed by modifications or additions to +the software. If you or your company make any written claim that the software +infringes or contributes to infringement of any patent, your patent license for +the software granted under these terms ends immediately. If your company makes +such a claim, your patent license ends immediately for work on behalf of your +company. + +## Notices + +You must ensure that anyone who gets a copy of any part of the software from you +also gets a copy of these terms. + +If you modify the software, you must include in any modified copies of the +software prominent notices stating that you have modified the software. + +## No Other Rights + +These terms do not imply any licenses other than those expressly granted in +these terms. + +## Termination + +If you use the software in violation of these terms, such use is not licensed, +and your licenses will automatically terminate. If the licensor provides you +with a notice of your violation, and you cease all violation of this license no +later than 30 days after you receive that notice, your licenses will be +reinstated retroactively. However, if you violate these terms after such +reinstatement, any additional violation of these terms will cause your licenses +to terminate automatically and permanently. + +## No Liability + +*As far as the law allows, the software comes as is, without any warranty or +condition, and the licensor will not be liable to you for any damages arising +out of these terms or the use or nature of the software, under any kind of +legal claim.* + +## Definitions + +The **licensor** is the entity offering these terms, and the **software** is the +software the licensor makes available under these terms, including any portion +of it. + +**you** refers to the individual or entity agreeing to these terms. 
+ +**your company** is any legal entity, sole proprietorship, or other kind of +organization that you work for, plus all organizations that have control over, +are under the control of, or are under common control with that +organization. **control** means ownership of substantially all the assets of an +entity, or the power to direct its management and policies by vote, contract, or +otherwise. Control can be direct or indirect. + +**your licenses** are all the licenses granted to you for the software under +these terms. + +**use** means anything you do with the software requiring one of your licenses. + +**trademark** means trademarks, service marks, and similar rights. diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml index 2b8b21ff8dc7..50d933f4bd94 100644 --- a/packages/azure_logs/manifest.yml +++ b/packages/azure_logs/manifest.yml @@ -3,7 +3,7 @@ name: azure_logs title: "Custom Azure Logs Input" version: 0.1.0+build0002 source: - license: "Apache-2.0" + license: Elastic-2.0 description: "Collect log events from Azure Event Hubs with Elastic Agent" type: input categories: From 1d586c5a0ceb6372f26c66180d2822fe549a88cf Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Tue, 29 Oct 2024 11:41:53 +0100 Subject: [PATCH 05/19] Add pipeline override option With this pipeline override, users can run a custom pipeline only for a specific input package installation. 
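For illustration, the pipeline this option points to is an ordinary Elasticsearch ingest pipeline. A minimal sketch, assuming a hypothetical pipeline ID `azure-logs-custom` and a purely illustrative processor, could be created from Kibana Dev Tools:

```console
PUT _ingest/pipeline/azure-logs-custom
{
  "description": "Example pipeline for the Custom Azure Logs input (illustrative)",
  "processors": [
    {
      "rename": {
        "field": "message",
        "target_field": "event.original",
        "ignore_missing": true
      }
    }
  ]
}
```

Setting the integration's Ingest Pipeline option to `azure-logs-custom` would then run this pipeline for every document collected by that policy.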
--- packages/azure_logs/agent/input/input.yml.hbs | 3 +++ packages/azure_logs/manifest.yml | 10 +++++++++- 2 files changed, 12 insertions(+), 1 deletion(-) diff --git a/packages/azure_logs/agent/input/input.yml.hbs b/packages/azure_logs/agent/input/input.yml.hbs index 40c0dd217007..3f948b8ebe4c 100644 --- a/packages/azure_logs/agent/input/input.yml.hbs +++ b/packages/azure_logs/agent/input/input.yml.hbs @@ -20,6 +20,9 @@ storage_account: {{storage_account}} {{#if storage_account_key}} storage_account_key: {{storage_account_key}} {{/if}} +{{#if pipeline}} +pipeline: {{pipeline}} +{{/if}} {{#if resource_manager_endpoint}} resource_manager_endpoint: {{resource_manager_endpoint}} {{/if}} diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml index 50d933f4bd94..9c66a215c54d 100644 --- a/packages/azure_logs/manifest.yml +++ b/packages/azure_logs/manifest.yml @@ -91,7 +91,15 @@ policy_templates: [Elasticsearch index names](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html). default: azure_logs.generic required: true - show_user: true + show_user: true + - name: pipeline + type: text + title: Ingest Pipeline + description: >- + The ingest pipeline ID to use for processing the data. If provided, + replaces the default pipeline for this integration. + required: false + show_user: true - name: resource_manager_endpoint type: text title: Resource Manager Endpoint From 234f06e91518af101e58431c558e46552a2b3e8a Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Tue, 29 Oct 2024 11:48:02 +0100 Subject: [PATCH 06/19] Replace stock logo with the standard inputs logo All other input packages seem to use this logo. 
---
 packages/azure_logs/img/icon.svg              |   4 ++++
 packages/azure_logs/img/sample-logo.svg       |   1 -
 packages/azure_logs/img/sample-screenshot.png | Bin 18849 -> 0 bytes
 packages/azure_logs/manifest.yml              |  11 ++---------
 4 files changed, 6 insertions(+), 10 deletions(-)
 create mode 100644 packages/azure_logs/img/icon.svg
 delete mode 100644 packages/azure_logs/img/sample-logo.svg
 delete mode 100644 packages/azure_logs/img/sample-screenshot.png

diff --git a/packages/azure_logs/img/icon.svg b/packages/azure_logs/img/icon.svg
new file mode 100644
index 000000000000..173fdec5072e
--- /dev/null
+++ b/packages/azure_logs/img/icon.svg
@@ -0,0 +1,4 @@
+
+
+
+
\ No newline at end of file
diff --git a/packages/azure_logs/img/sample-logo.svg b/packages/azure_logs/img/sample-logo.svg
deleted file mode 100644
index 6268dd88f3b3..000000000000
--- a/packages/azure_logs/img/sample-logo.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/packages/azure_logs/img/sample-screenshot.png b/packages/azure_logs/img/sample-screenshot.png
deleted file mode 100644
index d7a56a3ecc078c38636698cefba33f86291dd178..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml
index 9c66a215c54d..43cd8c59fb75 100644
--- a/packages/azure_logs/manifest.yml
+++ b/packages/azure_logs/manifest.yml
@@ -14,16 +14,9 @@ conditions:
     version: "^8.13.0"
   elastic:
     subscription: "basic"
-screenshots:
-  - src: /img/sample-screenshot.png
-    title: Sample screenshot
-    size: 600x600
-    type: image/png
 icons:
-  - src: /img/sample-logo.svg
-    title: Sample logo
-    size: 32x32
-    type: image/svg+xml
+  - src: "/img/icon.svg"
+    type: "image/svg+xml"
 policy_templates:
   - name: azure-logs
     type:
logs From 31e9012e8965b3a1a964b6146933bad4bcb5b529 Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Tue, 29 Oct 2024 13:26:50 +0100 Subject: [PATCH 07/19] Add integration docs --- packages/azure_logs/_dev/build/docs/README.md | 401 +++++++++++++++++ packages/azure_logs/docs/README.md | 411 ++++++++++++++++-- 2 files changed, 765 insertions(+), 47 deletions(-) create mode 100644 packages/azure_logs/_dev/build/docs/README.md diff --git a/packages/azure_logs/_dev/build/docs/README.md b/packages/azure_logs/_dev/build/docs/README.md new file mode 100644 index 000000000000..adee65dec748 --- /dev/null +++ b/packages/azure_logs/_dev/build/docs/README.md @@ -0,0 +1,401 @@ +# Custom Azure Logs Input + +The Custom Azure Logs Input integration collects logs from Azure Event Hub. + +Use the integration to collect logs from: + +* Azure services that support exporting logs to Event Hub +* Any other source that can send logs to an Event Hub + +## Data streams + +The Custom Azure Logs Input integration collects one type of data stream: logs. + +The integration does not come with a pre-defined data stream. You can select your dataset and namespace of choice when configuring the integration. + +For example, if you select `azure.custom` as your dataset, and `default` as your namespace, the integration will send the data to the `logs-azure.custom-default` data stream. + +Custom Logs integrations give you all the flexibility you need to configure the integration to your needs. + +## Requirements + +You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. +You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware. + +Before using the Custom Azure Logs Input you will need: + +* One **event hub** to store in-flight logs exported by Azure services (or other sources) and make them available to Elastic Agent. 
+* A **storage account** to store information about logs consumed by the Elastic Agent.

### Event Hub

[Azure Event Hubs](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-about) is a data streaming platform and event ingestion service. It can receive and temporarily store millions of events.

Elastic Agent with the Custom Azure Logs Input integration will consume logs from the Event Hubs service.

```text
  ┌────────────────┐      ┌───────────┐
  │                │      │  Elastic  │
  │ myeventhub     │─────▶│  Agent    │
  └────────────────┘      └───────────┘
```

To learn more about Event Hubs, refer to [Features and terminology in Azure Event Hubs](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-features).

### Storage Account Container

The [Storage account](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview) is a versatile Azure service that allows you to store data in various storage types, including blobs, file shares, queues, tables, and disks.

The Custom Azure Logs Input integration requires a Storage account container to work.

The integration uses the Storage Account container for checkpointing; it stores data about the Consumer Group (state, position, or offset) and shares it among the Elastic Agents. Sharing such information allows multiple Elastic Agents assigned to the same agent policy to work together; this enables horizontal scaling of the logs processing when required.

```text
  ┌────────────────┐                  ┌───────────┐
  │                │      logs        │  Elastic  │
  │ myeventhub     │─────────────────▶│  Agent    │
  └────────────────┘                  └───────────┘
                                            │
                     consumer group info    │
  ┌────────────────┐ (state, position, or   │
  │ log-myeventhub │  offset)               │
  │                │◀───────────────────────┘
  └────────────────┘
```

The Elastic Agent automatically creates one container for the Custom Azure Logs Input integration. The Agent will then create one blob for each partition on the event hub. 
+ +For example, if the integration is configured to fetch data from an event hub with four partitions, the Agent will create the following: + +* One storage account container. +* Four blobs in that container. + +The information stored in the blobs is small (usually < 500 bytes per blob) and accessed frequently. Elastic recommends using the Hot storage tier. + +You need to keep the Storage Account container as long as you need to run the integration with the Elastic Agent. If you delete a Storage Account container, the Elastic Agent will stop working and create a new one the next time it starts. + +By deleting a Storage Account container, the Elastic Agent will lose track of the last message processed and start processing messages from the beginning of the event hub retention period. + +## Setup + +Before adding the integration, you must complete the following tasks. + +### Create an Event Hub + +The event hub receives the logs exported from the Azure service and makes them available to the Elastic Agent to pick up. + +Here's the high-level overview of the required steps: + +* Create a resource group, or select an existing one. +* Create an Event Hubs namespace. +* Create an event hub. + +For a detailed step-by-step guide, check the quickstart [Create an event hub using Azure portal](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-create). + +Take note of the event hub **Name**, which you will use later when specifying an **eventhub** in the integration settings. + +#### Event Hubs Namespace vs Event Hub + +You should use the event hub name (not the Event Hubs namespace name) as a value for the **eventhub** option in the integration settings. + +If you are new to Event Hubs, think of the Event Hubs namespace as the cluster and the event hub as the topic. You will typically have one cluster and multiple topics. 
+ +If you are familiar with Kafka, here's a conceptual mapping between the two: + +| Kafka Concept | Event Hub Concept | +|----------------|-------------------| +| Cluster | Namespace | +| Topic | An event hub | +| Partition | Partition | +| Consumer Group | Consumer Group | +| Offset | Offset | + +#### How many partitions? + +The number of partitions is essential to balance the event hub cost and performance. + +Here are a few examples with one or multiple agents, with recommendations on picking the correct number of partitions for your use case. + +##### Single Agent + +With a single Agent deployment, increasing the number of partitions on the event hub is the primary driver in scale-up performances. The Agent creates one worker for each partition. + +```text +┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ + +│ │ │ │ + +│ ┌─────────────────┐ │ │ ┌─────────────────┐ │ + │ partition 0 │◀───────────│ worker │ +│ └─────────────────┘ │ │ └─────────────────┘ │ + ┌─────────────────┐ ┌─────────────────┐ +│ │ partition 1 │◀──┼────┼───│ worker │ │ + └─────────────────┘ └─────────────────┘ +│ ┌─────────────────┐ │ │ ┌─────────────────┐ │ + │ partition 2 │◀────────── │ worker │ +│ └─────────────────┘ │ │ └─────────────────┘ │ + ┌─────────────────┐ ┌─────────────────┐ +│ │ partition 3 │◀──┼────┼───│ worker │ │ + └─────────────────┘ └─────────────────┘ +│ │ │ │ + +│ │ │ │ + +└ Event Hub ─ ─ ─ ─ ─ ─ ─ ┘ └ Agent ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘ +``` + +##### Two or more Agents + +With more than one Agent, setting the number of partitions is crucial. The agents share the existing partitions to scale out performance and improve availability. + +The number of partitions must be at least the number of agents. 
+ +```text +┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ + +│ │ │ ┌─────────────────┐ │ + ┌──────│ worker │ +│ ┌─────────────────┐ │ │ │ └─────────────────┘ │ + │ partition 0 │◀────┘ ┌─────────────────┐ +│ └─────────────────┘ │ ┌──┼───│ worker │ │ + ┌─────────────────┐ │ └─────────────────┘ +│ │ partition 1 │◀──┼─┘ │ │ + └─────────────────┘ ─Agent─ ─ ─ ─ ─ ─ ─ ─ ─ ─ +│ ┌─────────────────┐ │ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ + │ partition 2 │◀────┐ +│ └─────────────────┘ │ │ │ ┌─────────────────┐ │ + ┌─────────────────┐ └─────│ worker │ +│ │ partition 3 │◀──┼─┐ │ └─────────────────┘ │ + └─────────────────┘ │ ┌─────────────────┐ +│ │ └──┼──│ worker │ │ + └─────────────────┘ +│ │ │ │ + +└ Event Hub ─ ─ ─ ─ ─ ─ ─ ┘ └ Agent ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘ +``` + +##### Recommendations + +Create an event hub with at least two partitions. Two partitions allow low-volume deployment to support high availability with two agents. Consider creating four partitions or more to handle medium-volume deployments with availability. + +To learn more about event hub partitions, read an in-depth guide from Microsoft at https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-create. + +To learn more about event hub partition from the performance perspective, check the scalability-focused document at https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-scalability#partitions. + +#### Consumer Group + +Like all other event hub clients, Elastic Agent needs a consumer group name to access the event hub. + +A Consumer Group is a view (state, position, or offset) of an entire event hub. Consumer groups enable multiple agents to each have a separate view of the event stream, and to read the logs independently at their own pace and with their own offsets. + +Consumer groups allow multiple Elastic Agents assigned to the same agent policy to work together; this enables horizontal scaling of the logs processing when required. 
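
To build intuition for how agents in the same consumer group share partitions, here is a toy, self-contained sketch. This is plain Python and not the Agent's actual assignment algorithm (Event Hubs load balancing is dynamic), but the steady state it reaches looks like this:

```python
# Toy model of partition balancing across Elastic Agents in one consumer
# group. The real load balancer is dynamic, but converges to a similar split.
def assign_partitions(partition_count: int, agent_count: int) -> dict[str, list[int]]:
    """Distribute partition IDs round-robin across agents."""
    assignments: dict[str, list[int]] = {f"agent-{i}": [] for i in range(agent_count)}
    for partition in range(partition_count):
        assignments[f"agent-{partition % agent_count}"].append(partition)
    return assignments

# Four partitions shared by two agents: two partitions each.
print(assign_partitions(4, 2))
# With more agents than partitions, the extra agents sit idle.
print(assign_partitions(2, 3))
```

This is why the number of partitions should be at least the number of agents: an agent with no partition assigned contributes nothing to throughput.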
+ +In most cases, you can use the default consumer group named `$Default`. If `$Default` is already used by other applications, you can create a consumer group dedicated to the Azure Logs integration. + +#### Connection string + +The Elastic Agent requires a connection string to access the event hub and fetch the exported logs. The connection string contains details about the event hub used and the credentials required to access it. + +To get the connection string for your Event Hubs namespace: + +1. Visit the **Event Hubs namespace** you created in a previous step. +1. Select **Settings** > **Shared access policies**. + +Create a new Shared Access Policy (SAS): + +1. Select **Add** to open the creation panel. +1. Add a **Policy name** (for example, "ElasticAgent"). +1. Select the **Listen** claim. +1. Select **Create**. + +When the SAS Policy is ready, select it to display the information panel. + +Take note of the **Connection string–primary key**, which you will use later when specifying a **connection_string** in the integration settings. + +### Create a Diagnostic Settings + +The diagnostic settings export the logs from Azure services to a destination. To use the Azure Logs integration, the destination must be an event hub. + +To create the diagnostic settings to export logs: + +1. Locate the diagnostic settings for the service (for example, Microsoft Entra ID). +1. Select diagnostic settings in the **Monitoring** section of the service. Note that different services may place the diagnostic settings in different positions. +1. Select **Add diagnostic settings**. + +In the diagnostic settings page, select the source **log categories** you want to export and then select their **destination**. + +#### Select log categories + +Each Azure service exports a well-defined list of log categories. Check the individual integration doc to learn which log categories are supported by the integration. 
+ +#### Select the destination + +Select the **subscription** and the **Event Hubs namespace** you previously created. Select the event hub dedicated to this integration. + +```text + ┌───────────────┐   ┌──────────────┐   ┌───────────────┐      ┌───────────┐ + │  MS Entra ID  │   │  Diagnostic  │   │    adlogs     │      │  Elastic  │ + │               ├──▶│   Settings   │──▶│               │─────▶│   Agent   │ + └───────────────┘   └──────────────┘   └───────────────┘      └───────────┘ +``` + +### Create a Storage account container + +The Elastic Agent stores the consumer group information (state, position, or offset) in a storage account container. Making this information available to all agents allows them to share the logs processing and resume from the last processed logs after a restart. + +NOTE: Use the storage account as a checkpoint store only. + +To create the storage account: + +1. Sign in to the [Azure Portal](https://portal.azure.com/) and create your storage account. +1. While configuring your project details, make sure you select the following recommended default settings: +   - Hierarchical namespace: disabled +   - Minimum TLS version: Version 1.2 +   - Access tier: Hot +   - Enable soft delete for blobs: disabled +   - Enable soft delete for containers: disabled + +1. When the new storage account is ready, take note of the storage account name and the storage account access keys, as you will use them later to authenticate your Elastic application’s requests to this storage account. + +This is the final diagram of the setup for collecting Activity logs from the Azure Monitor service. 
+ +```text + ┌───────────────┐ ┌──────────────┐ ┌────────────────┐ ┌───────────┐ + │ MS Entra ID │ │ Diagnostic │ │ adlogs │ logs │ Elastic │ + │ <> ├──▶│ Settings │──▶│ <> │────────▶│ Agent │ + └───────────────┘ └──────────────┘ └────────────────┘ └───────────┘ + │ + ┌──────────────┐ consumer group info │ + │ azurelogs │ (state, position, or │ + │<> │◀───────────────offset)──────────────┘ + └──────────────┘ +``` + +#### How many Storage account containers? + +The Elastic Agent can use one Storage Account (SA) for multiple integrations. + +The Agent creates one SA container for the integration. The SA container name is a combination of the event hub name and a prefix (`azure-eventhub-input-[eventhub]`). + +### Running the integration behind a firewall + +When you run the Elastic Agent behind a firewall, to ensure proper communication with the necessary components, you need to allow traffic on port `5671` and `5672` for the event hub, and port `443` for the Storage Account container. + +```text +┌────────────────────────────────┐ ┌───────────────────┐ ┌───────────────────┐ +│ │ │ │ │ │ +│ ┌────────────┐ ┌───────────┐ │ │ ┌──────────────┐ │ │ ┌───────────────┐ │ +│ │ diagnostic │ │ event hub │ │ │ │azure-eventhub│ │ │ │ activity logs │ │ +│ │ setting │──▶│ │◀┼AMQP─│ <> │─┼──┼▶│<>│ │ +│ └────────────┘ └───────────┘ │ │ └──────────────┘ │ │ └───────────────┘ │ +│ │ │ │ │ │ │ +│ │ │ │ │ │ │ +│ │ │ │ │ │ │ +│ ┌─────────────┬─────HTTPS─┼──────────┘ │ │ │ +│ ┌───────┼─────────────┼──────┐ │ │ │ │ │ +│ │ │ │ │ │ │ │ │ │ +│ │ ▼ ▼ │ │ └─Agent─────────────┘ └─Elastic Cloud─────┘ +│ │ ┌──────────┐ ┌──────────┐ │ │ +│ │ │ 0 │ │ 1 │ │ │ +│ │ │ <> │ │ <> │ │ │ +│ │ └──────────┘ └──────────┘ │ │ +│ │ │ │ +│ │ │ │ +│ └─Storage Account Container──┘ │ +│ │ +│ │ +└─Azure──────────────────────────┘ +``` + +#### Event Hub + +Port `5671` and `5672` are commonly used for secure communication with the event hub. These ports are used to receive events. 
By allowing traffic on these ports, the Elastic Agent can establish a secure connection with the event hub. + +For more information, check the following documents: + +* [What ports do I need to open on the firewall?](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-faq#what-ports-do-i-need-to-open-on-the-firewall) from the [Event Hubs frequently asked questions](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-faq#what-ports-do-i-need-to-open-on-the-firewall). +* [AMQP outbound port requirements](https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-amqp-protocol-guide#amqp-outbound-port-requirements) + +#### Storage Account Container + +Port `443` is used for secure communication with the Storage Account container. This port is commonly used for HTTPS traffic. By allowing traffic on port 443, the Elastic Agent can securely access and interact with the Storage Account container, which is essential for storing and retrieving checkpoint data for each event hub partition. + +#### DNS + +Optionally, you can restrict the traffic to the following domain names: + +```text +*.servicebus.windows.net +*.blob.core.windows.net +*.cloudapp.net +``` + +## Settings + +Use the following settings to configure the Azure Logs integration when you add it to Fleet. + +`eventhub` : +_string_ +A fully managed, real-time data ingestion service. Elastic recommends using only letters, numbers, and the hyphen (-) character for event hub names to maximize compatibility. You can use existing event hubs having underscores (_) in the event hub name; in this case, the integration will replace underscores with hyphens (-) when it uses the event hub name to create dependent Azure resources behind the scenes (e.g., the storage account container to store event hub consumer offsets). Elastic also recommends using a separate event hub for each log type as the field mappings of each log type differ. +Default value `insights-operational-logs`. 
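
The underscore-to-hyphen behavior described above can be illustrated with a short sketch. This is a hypothetical helper, not the integration's actual code; it combines the replacement rule with the `azure-eventhub-input-` container prefix mentioned earlier in this document:

```python
# Hypothetical sketch of how a checkpoint container name could be derived
# from an event hub name; the integration's real logic may differ.
def container_name(eventhub: str) -> str:
    # Azure container names allow lowercase letters, digits, and hyphens,
    # so underscores in the event hub name are replaced with hyphens.
    return "azure-eventhub-input-" + eventhub.lower().replace("_", "-")

print(container_name("my_app_logs"))  # azure-eventhub-input-my-app-logs
```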
+ +`consumer_group` : +_string_ +Enable the publish/subscribe mechanism of Event Hubs with consumer groups. A consumer group is a view (state, position, or offset) of an entire event hub. Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. +Default value: `$Default` + +`connection_string` : +_string_ + +The connection string required to communicate with Event Hubs. See [Get an Event Hubs connection string](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-get-connection-string) for more information. + +A Blob Storage account is required to store/retrieve/update the offset or state of the event hub messages. This allows the integration to start back up at the spot that it stopped processing messages. + +`storage_account` : +_string_ +The name of the storage account where the state/offsets are stored and updated. + +`storage_account_key` : +_string_ +The storage account key. Used to authorize access to data in your storage account. + +`storage_account_container` : +_string_ +The storage account container where the integration stores the checkpoint data for the consumer group. It is an advanced option to use with extreme care. You MUST use a dedicated storage account container for each Azure log type (activity, sign-in, audit logs, and others). DO NOT REUSE the same container name for more than one Azure log type. See [Container Names](https://docs.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata#container-names) for details on naming rules from Microsoft. The integration generates a default container name if not specified. + +`pipeline` : +_string_ +Optional. Overrides the default ingest pipeline for this integration. + +`resource_manager_endpoint` : +_string_ +Optional. By default, the integration uses the Azure public environment. 
To override this and use a different Azure environment, users can provide a specific resource manager endpoint. + +Examples: + +* Azure ChinaCloud: `https://management.chinacloudapi.cn/` +* Azure GermanCloud: `https://management.microsoftazure.de/` +* Azure PublicCloud: `https://management.azure.com/` +* Azure USGovernmentCloud: `https://management.usgovcloudapi.net/` + +This setting can also be used to define your own endpoints, like for hybrid cloud models. + +## Handling Malformed JSON in Azure Logs + +Azure services have been observed to send [malformed JSON](https://learn.microsoft.com/en-us/answers/questions/1001797/invalid-json-logs-produced-for-function-apps) documents occasionally. These logs can disrupt the expected JSON formatting and lead to parsing issues during processing. + +To address this issue, the advanced settings section of each data stream offers two sanitization options: + +* Sanitizes New Lines: removes new lines in logs. +* Sanitizes Single Quotes: replaces single quotes with double quotes in logs, excluding single quotes occurring within double quotes. + +Malformed logs can be identified by: + +* Presence of a `records` array in the `message` field, indicating a failure to unmarshal the byte slice. +* Existence of an `error.message` field containing the text "Received invalid JSON from the Azure Cloud platform. Unable to parse the source log message." 
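
As a rough illustration of what the two sanitization options do, here is a simplified sketch. This is not the integration's actual implementation (which may handle more edge cases); it only demonstrates the behavior described above:

```python
# Simplified sketches of the two sanitization options described above.
def sanitize_new_lines(raw: str) -> str:
    """Remove literal new lines that break JSON parsing."""
    return raw.replace("\r", "").replace("\n", "")

def sanitize_single_quotes(raw: str) -> str:
    """Replace single quotes with double quotes, leaving single quotes
    that occur inside double-quoted strings untouched."""
    out = []
    inside_double_quotes = False
    for ch in raw:
        if ch == '"':
            inside_double_quotes = not inside_double_quotes
            out.append(ch)
        elif ch == "'" and not inside_double_quotes:
            out.append('"')
        else:
            out.append(ch)
    return "".join(out)

print(sanitize_new_lines('{"a":\n 1}'))             # {"a": 1}
print(sanitize_single_quotes("{'msg': \"don't\"}"))  # {"msg": "don't"}
```

Note how the apostrophe in `don't` survives because it sits inside a double-quoted string.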
+ +Known data streams that might produce malformed logs: + +* Platform Logs +* Spring Apps Logs +* PostgreSQL Flexible Servers Logs diff --git a/packages/azure_logs/docs/README.md b/packages/azure_logs/docs/README.md index badcc8402399..adee65dec748 100644 --- a/packages/azure_logs/docs/README.md +++ b/packages/azure_logs/docs/README.md @@ -1,84 +1,401 @@ - - - # Custom Azure Logs Input - +* Azure services that support exporting logs to Event Hub +* Any other source that can send logs to an Event Hub ## Data streams - +The Custom Azure Logs Input integration collects one type of data stream: logs. - - +The integration does not come with a pre-defined data stream. You can select your dataset and namespace of choice when configuring the integration. - - +For example, if you select `azure.custom` as your dataset, and `default` as your namespace, the integration will send the data to the `logs-azure.custom-default` data stream. - +Custom Logs integrations give you all the flexibility you need to configure the integration to your needs. ## Requirements You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware. - +Before using the Custom Azure Logs Input you will need: + +* One **event hub** to store in-flight logs exported by Azure services (or other sources) and make them available to Elastic Agent. +* A **storage account** to store information about logs consumed by the Elastic Agent. + +### Event Hub + +[Azure Event Hubs](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-about) is a data streaming platform and event ingestion service. It can receive and temporarily store millions of events. + +Elastic Agent with the Custom Azure Logs Input integration will consume logs from the Event Hubs service. 
+ +```text + ┌────────────────┐ ┌───────────┐ + │ myeventhub │ │ Elastic │ + │ <> │─────▶│ Agent │ + └────────────────┘ └───────────┘ +``` + +To learn more about Event Hubs, refer to [Features and terminology in Azure Event Hubs](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-features). + +### Storage Account Container + +The [Storage account](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview) is a versatile Azure service that allows you to store data in various storage types, including blobs, file shares, queues, tables, and disks. + +The Custom Azure Logs Input integration requires a Storage account container to work. + +The integration uses the Storage Account container for checkpointing; it stores data about the Consumer Group (state, position, or offset) and shares it among the Elastic Agents. Sharing such information allows multiple Elastic Agents assigned to the same agent policy to work together; this enables horizontal scaling of the logs processing when required. + +```text + ┌────────────────┐ ┌───────────┐ + │ myeventhub │ logs │ Elastic │ + │ <> │────────────────────▶│ Agent │ + └────────────────┘ └───────────┘ + │ + consumer group info │ + ┌────────────────┐ (state, position, or │ + │ log-myeventhub │ offset) │ + │ <> │◀───────────────────────────┘ + └────────────────┘ +``` + +The Elastic Agent automatically creates one container for the Custom Azure Logs Input integration. The Agent will then create one blob for each partition on the event hub. + +For example, if the integration is configured to fetch data from an event hub with four partitions, the Agent will create the following: + +* One storage account container. +* Four blobs in that container. + +The information stored in the blobs is small (usually < 500 bytes per blob) and accessed frequently. Elastic recommends using the Hot storage tier. 
+ +You need to keep the Storage Account container as long as you need to run the integration with the Elastic Agent. If you delete a Storage Account container, the Elastic Agent will stop working and create a new one the next time it starts. + +By deleting a Storage Account container, the Elastic Agent will lose track of the last message processed and start processing messages from the beginning of the event hub retention period. ## Setup - +Before adding the integration, you must complete the following tasks. + +### Create an Event Hub + +The event hub receives the logs exported from the Azure service and makes them available to the Elastic Agent to pick up. + +Here's the high-level overview of the required steps: + +* Create a resource group, or select an existing one. +* Create an Event Hubs namespace. +* Create an event hub. + +For a detailed step-by-step guide, check the quickstart [Create an event hub using Azure portal](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-create). + +Take note of the event hub **Name**, which you will use later when specifying an **eventhub** in the integration settings. + +#### Event Hubs Namespace vs Event Hub + +You should use the event hub name (not the Event Hubs namespace name) as a value for the **eventhub** option in the integration settings. + +If you are new to Event Hubs, think of the Event Hubs namespace as the cluster and the event hub as the topic. You will typically have one cluster and multiple topics. + +If you are familiar with Kafka, here's a conceptual mapping between the two: + +| Kafka Concept | Event Hub Concept | +|----------------|-------------------| +| Cluster | Namespace | +| Topic | An event hub | +| Partition | Partition | +| Consumer Group | Consumer Group | +| Offset | Offset | + +#### How many partitions? + +The number of partitions is essential to balance the event hub cost and performance. 
+ +Here are a few examples with one or multiple agents, with recommendations on picking the correct number of partitions for your use case. + +##### Single Agent + +With a single Agent deployment, increasing the number of partitions on the event hub is the primary driver in scale-up performances. The Agent creates one worker for each partition. + +```text +┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ + +│ │ │ │ + +│ ┌─────────────────┐ │ │ ┌─────────────────┐ │ + │ partition 0 │◀───────────│ worker │ +│ └─────────────────┘ │ │ └─────────────────┘ │ + ┌─────────────────┐ ┌─────────────────┐ +│ │ partition 1 │◀──┼────┼───│ worker │ │ + └─────────────────┘ └─────────────────┘ +│ ┌─────────────────┐ │ │ ┌─────────────────┐ │ + │ partition 2 │◀────────── │ worker │ +│ └─────────────────┘ │ │ └─────────────────┘ │ + ┌─────────────────┐ ┌─────────────────┐ +│ │ partition 3 │◀──┼────┼───│ worker │ │ + └─────────────────┘ └─────────────────┘ +│ │ │ │ + +│ │ │ │ + +└ Event Hub ─ ─ ─ ─ ─ ─ ─ ┘ └ Agent ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘ +``` + +##### Two or more Agents + +With more than one Agent, setting the number of partitions is crucial. The agents share the existing partitions to scale out performance and improve availability. + +The number of partitions must be at least the number of agents. 
+ +```text +┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ + +│ │ │ ┌─────────────────┐ │ + ┌──────│ worker │ +│ ┌─────────────────┐ │ │ │ └─────────────────┘ │ + │ partition 0 │◀────┘ ┌─────────────────┐ +│ └─────────────────┘ │ ┌──┼───│ worker │ │ + ┌─────────────────┐ │ └─────────────────┘ +│ │ partition 1 │◀──┼─┘ │ │ + └─────────────────┘ ─Agent─ ─ ─ ─ ─ ─ ─ ─ ─ ─ +│ ┌─────────────────┐ │ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ + │ partition 2 │◀────┐ +│ └─────────────────┘ │ │ │ ┌─────────────────┐ │ + ┌─────────────────┐ └─────│ worker │ +│ │ partition 3 │◀──┼─┐ │ └─────────────────┘ │ + └─────────────────┘ │ ┌─────────────────┐ +│ │ └──┼──│ worker │ │ + └─────────────────┘ +│ │ │ │ + +└ Event Hub ─ ─ ─ ─ ─ ─ ─ ┘ └ Agent ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘ +``` + +##### Recommendations + +Create an event hub with at least two partitions. Two partitions allow low-volume deployment to support high availability with two agents. Consider creating four partitions or more to handle medium-volume deployments with availability. + +To learn more about event hub partitions, read an in-depth guide from Microsoft at https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-create. + +To learn more about event hub partition from the performance perspective, check the scalability-focused document at https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-scalability#partitions. + +#### Consumer Group + +Like all other event hub clients, Elastic Agent needs a consumer group name to access the event hub. + +A Consumer Group is a view (state, position, or offset) of an entire event hub. Consumer groups enable multiple agents to each have a separate view of the event stream, and to read the logs independently at their own pace and with their own offsets. + +Consumer groups allow multiple Elastic Agents assigned to the same agent policy to work together; this enables horizontal scaling of the logs processing when required. 
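
For orientation, here is a hypothetical sketch of how the event hub and consumer group options fit together when the integration is configured. The option names follow the settings documented in this README, but the values are illustrative and this is not the exact Fleet policy schema:

```yaml
# Hypothetical configuration sketch; the exact policy schema may differ.
eventhub: "adlogs"                       # event hub name, not the namespace name
consumer_group: "$Default"               # or a group dedicated to this integration
connection_string: "Endpoint=sb://..."   # SAS connection string with the Listen claim
storage_account: "mystorageaccount"      # checkpoint store shared by all agents
storage_account_key: "<key>"
```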
+ +In most cases, you can use the default consumer group named `$Default`. If `$Default` is already used by other applications, you can create a consumer group dedicated to the Azure Logs integration. + +#### Connection string + +The Elastic Agent requires a connection string to access the event hub and fetch the exported logs. The connection string contains details about the event hub used and the credentials required to access it. + +To get the connection string for your Event Hubs namespace: + +1. Visit the **Event Hubs namespace** you created in a previous step. +1. Select **Settings** > **Shared access policies**. + +Create a new Shared Access Policy (SAS): + +1. Select **Add** to open the creation panel. +1. Add a **Policy name** (for example, "ElasticAgent"). +1. Select the **Listen** claim. +1. Select **Create**. + +When the SAS Policy is ready, select it to display the information panel. + +Take note of the **Connection string–primary key**, which you will use later when specifying a **connection_string** in the integration settings. + +### Create a Diagnostic Settings + +The diagnostic settings export the logs from Azure services to a destination. To use the Azure Logs integration, the destination must be an event hub. + +To create the diagnostic settings to export logs: + +1. Locate the diagnostic settings for the service (for example, Microsoft Entra ID). +1. Select diagnostic settings in the **Monitoring** section of the service. Note that different services may place the diagnostic settings in different positions. +1. Select **Add diagnostic settings**. + +In the diagnostic settings page, select the source **log categories** you want to export and then select their **destination**. + +#### Select log categories + +Each Azure service exports a well-defined list of log categories. Check the individual integration doc to learn which log categories are supported by the integration. 
+ +#### Select the destination + +Select the **subscription** and the **Event Hubs namespace** you previously created. Select the event hub dedicated to this integration. + +```text + ┌───────────────┐   ┌──────────────┐   ┌───────────────┐      ┌───────────┐ + │  MS Entra ID  │   │  Diagnostic  │   │    adlogs     │      │  Elastic  │ + │               ├──▶│   Settings   │──▶│               │─────▶│   Agent   │ + └───────────────┘   └──────────────┘   └───────────────┘      └───────────┘ +``` + +### Create a Storage account container + +The Elastic Agent stores the consumer group information (state, position, or offset) in a storage account container. Making this information available to all agents allows them to share the logs processing and resume from the last processed logs after a restart. + +NOTE: Use the storage account as a checkpoint store only. + +To create the storage account: + +1. Sign in to the [Azure Portal](https://portal.azure.com/) and create your storage account. +1. While configuring your project details, make sure you select the following recommended default settings: +   - Hierarchical namespace: disabled +   - Minimum TLS version: Version 1.2 +   - Access tier: Hot +   - Enable soft delete for blobs: disabled +   - Enable soft delete for containers: disabled + +1. When the new storage account is ready, take note of the storage account name and the storage account access keys, as you will use them later to authenticate your Elastic application’s requests to this storage account. + +This is the final diagram of the setup for collecting Activity logs from the Azure Monitor service. 
+
+```text
+ ┌───────────────┐   ┌──────────────┐   ┌─────────────────┐  logs   ┌───────────┐
+ │  MS Entra ID  │   │  Diagnostic  │   │     adlogs      │         │  Elastic  │
+ │  <<service>>  ├──▶│   Settings   ├──▶│  <<event hub>>  ├────────▶│   Agent   │
+ └───────────────┘   └──────────────┘   └─────────────────┘         └─────┬─────┘
+                                                                          │
+       ┌─────────────────────┐      consumer group info                   │
+       │      azurelogs      │     (state, position, or                   │
+       │ <<storage account>> │◀───────────── offset) ────────────────────┘
+       └─────────────────────┘
+```
+
+#### How many Storage account containers?
+
+The Elastic Agent can use one Storage Account (SA) for multiple integrations.
+
+The Agent creates one SA container for the integration. The SA container name is a combination of the event hub name and a prefix (`azure-eventhub-input-[eventhub]`).
+
+### Running the integration behind a firewall
+
+When you run the Elastic Agent behind a firewall, to ensure proper communication with the necessary components, you need to allow traffic on ports `5671` and `5672` for the event hub, and port `443` for the Storage Account container.
+
+```text
+┌────────────────────────────────┐      ┌──────────────────┐      ┌───────────────────┐
+│                                │      │                  │      │                   │
+│ ┌────────────┐  ┌───────────┐  │      │ ┌──────────────┐ │      │ ┌───────────────┐ │
+│ │ diagnostic ├─▶│ event hub │◀─┼─AMQP─┼─┤azure-eventhub│ │      │ │ activity logs │ │
+│ │  setting   │  │           │  │      │ │  <<input>>   ├─┼──────┼▶│<<data stream>>│ │
+│ └────────────┘  └───────────┘  │      │ └──────┬───────┘ │      │ └───────────────┘ │
+│                                │      └─Agent──┼─────────┘      └─Elastic Cloud─────┘
+│ ┌────────────────────────────┐ │    HTTPS      │
+│ │ ┌──────────┐ ┌──────────┐  │◀┼───────────────┘
+│ │ │    0     │ │    1     │  │ │
+│ │ │ <<blob>> │ │ <<blob>> │  │ │
+│ │ └──────────┘ └──────────┘  │ │
+│ └─Storage Account Container──┘ │
+└─Azure──────────────────────────┘
+```
+
+#### Event Hub
+
+Ports `5671` and `5672` are commonly used for secure communication with the event hub. These ports are used to receive events.
By allowing traffic on these ports, the Elastic Agent can establish a secure connection with the event hub.
+
+For more information, check the following documents:
+
+* [What ports do I need to open on the firewall?](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-faq#what-ports-do-i-need-to-open-on-the-firewall) from the [Event Hubs frequently asked questions](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-faq).
+* [AMQP outbound port requirements](https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-amqp-protocol-guide#amqp-outbound-port-requirements)
+
+#### Storage Account Container
+
+Port `443` is used for secure communication with the Storage Account container. This port is commonly used for HTTPS traffic. By allowing traffic on port `443`, the Elastic Agent can securely access and interact with the Storage Account container, which is essential for storing and retrieving checkpoint data for each event hub partition.
+
+#### DNS
+
+Optionally, you can restrict the traffic to the following domain names:
+
+```text
+*.servicebus.windows.net
+*.blob.core.windows.net
+*.cloudapp.net
+```
+
+## Settings
+
+Use the following settings to configure the Azure Logs integration when you add it to Fleet.
+
+`eventhub` :
+_string_
+The name of the event hub to read logs from. An event hub is a fully managed, real-time data ingestion service. Elastic recommends using only letters, numbers, and the hyphen (-) character in event hub names to maximize compatibility. You can use existing event hubs with underscores (_) in their names; in this case, the integration replaces underscores with hyphens (-) when it uses the event hub name to create dependent Azure resources behind the scenes (for example, the storage account container used to store event hub consumer offsets). Elastic also recommends using a separate event hub for each log type, since the field mappings differ between log types.
+Default value: `insights-operational-logs`.
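The underscore-to-hyphen replacement and the default container naming described in this document can be sketched as follows; these helpers are illustrative only and are not the integration's actual code:

```python
def normalize_eventhub_name(eventhub: str) -> str:
    # Some dependent Azure resources (such as storage account containers)
    # do not allow underscores, so the integration is described as
    # replacing them with hyphens.
    return eventhub.replace("_", "-")


def default_container_name(eventhub: str) -> str:
    # The docs describe the default storage account container name as
    # the event hub name combined with an `azure-eventhub-input-` prefix.
    return "azure-eventhub-input-" + normalize_eventhub_name(eventhub)
```

For example, an event hub named `ad_logs` would map to the container `azure-eventhub-input-ad-logs`.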
+
+`consumer_group` :
+_string_
+Enable the publish/subscribe mechanism of Event Hubs with consumer groups. A consumer group is a view (state, position, or offset) of an entire event hub. Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets.
+Default value: `$Default`
+
+`connection_string` :
+_string_
-For step-by-step instructions on how to set up an integration, see the
-[Getting started](https://www.elastic.co/guide/en/welcome-to-elastic/current/getting-started-observability.html) guide.
+The connection string required to communicate with Event Hubs. See [Get an Event Hubs connection string](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-get-connection-string) for more information.
-
+A Blob Storage account is required to store/retrieve/update the offset or state of the event hub messages. This allows the integration to resume processing from where it previously stopped.
-
-
+`storage_account` :
+_string_
+The name of the storage account where the state/offsets will be stored and updated.
-
-
+`storage_account_container` :
+_string_
+The storage account container where the integration stores the checkpoint data for the consumer group. This is an advanced option; use it with extreme care. You MUST use a dedicated storage account container for each Azure log type (activity, sign-in, audit logs, and others). DO NOT REUSE the same container name for more than one Azure log type. See [Container Names](https://docs.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata#container-names) for details on naming rules from Microsoft. The integration generates a default container name if not specified.
-
-
+Examples:
-
+This setting can also be used to define your own endpoints, like for hybrid cloud models.
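The Microsoft container naming rules linked above (3 to 63 characters; lowercase letters, digits, and hyphens; starting and ending with a letter or digit; no consecutive hyphens) can be checked with a short sketch. The `is_valid_container_name` helper is hypothetical and not part of the integration:

```python
import re

def is_valid_container_name(name: str) -> bool:
    # Lowercase alphanumeric segments separated by single hyphens; the
    # pattern also forces the name to start and end with a letter or digit.
    return (
        3 <= len(name) <= 63
        and re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name) is not None
    )
```

For example, a default name such as `azure-eventhub-input-adlogs` passes, while a name with uppercase letters or consecutive hyphens does not.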
- - +## Handling Malformed JSON in Azure Logs - - +To address this issue, the advanced settings section of each data stream offers two sanitization options: - - +* Presence of a records array in the message field, indicating a failure to unmarshal the byte slice. +* Existence of an error.message field containing the text "Received invalid JSON from the Azure Cloud platform. Unable to parse the source log message." - +* Platform Logs +* Spring Apps Logs +* PostgreSQL Flexible Servers Logs From e55bc8126e81f83ca83c8a73d014278cfe5098da Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Tue, 29 Oct 2024 13:27:45 +0100 Subject: [PATCH 08/19] Add tags, processors, and sanitization settings --- packages/azure_logs/manifest.yml | 42 ++++++++++++++++++++++++++++++++ 1 file changed, 42 insertions(+) diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml index 43cd8c59fb75..087111b9486a 100644 --- a/packages/azure_logs/manifest.yml +++ b/packages/azure_logs/manifest.yml @@ -101,6 +101,48 @@ policy_templates: multi: false required: false show_user: false + - name: tags + type: text + title: Tags + multi: true + required: true + show_user: false + default: + - azure-eventhub + - forwarded + - name: processors + type: yaml + title: Processors + multi: false + required: false + show_user: false + description: > + Processors are used to reduce the number of fields in the exported + event or to enhance the event with metadata. This executes in the agent + before the logs are parsed. + See [Processors](https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html) + for details. + - name: sanitize_newlines + type: bool + title: Sanitizes New Lines + description: > + Removes new lines in logs to ensure proper formatting of JSON data and + avoid parsing issues during processing. 
+ multi: false + required: false + show_user: false + default: false + - name: sanitize_singlequotes + required: true + show_user: false + title: Sanitizes Single Quotes + description: > + Replaces single quotes with double quotes (single quotes inside double + quotes are omitted) in logs to ensure proper formatting of JSON data + and avoid parsing issues during processing. + type: bool + multi: false + default: false owner: github: elastic/obs-ds-hosted-services type: elastic From e678854a4cf4c58e1cda2c2825514f80d5ad82be Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Tue, 29 Oct 2024 20:05:34 +0100 Subject: [PATCH 09/19] Add parse_message config option --- packages/azure_logs/manifest.yml | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml index 087111b9486a..f0fe947ec291 100644 --- a/packages/azure_logs/manifest.yml +++ b/packages/azure_logs/manifest.yml @@ -93,6 +93,14 @@ policy_templates: replaces the default pipeline for this integration. required: false show_user: true + - name: parse_message + type: bool + title: Parse azure message + description: Apply minimal json parsing of the message, extracting resource details for fields as `resourceId`, `time` if found. 
+ multi: false + required: false + show_user: true + default: false - name: resource_manager_endpoint type: text title: Resource Manager Endpoint From c11a733a27e0ebe1da3589ee56e46111e04060ab Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Tue, 29 Oct 2024 20:13:14 +0100 Subject: [PATCH 10/19] Add route_message config option --- packages/azure_logs/agent/input/input.yml.hbs | 3 +++ packages/azure_logs/manifest.yml | 8 ++++++++ 2 files changed, 11 insertions(+) diff --git a/packages/azure_logs/agent/input/input.yml.hbs b/packages/azure_logs/agent/input/input.yml.hbs index 3f948b8ebe4c..e2a5b070307e 100644 --- a/packages/azure_logs/agent/input/input.yml.hbs +++ b/packages/azure_logs/agent/input/input.yml.hbs @@ -35,6 +35,9 @@ tags: {{#if parse_message}} - parse_message {{/if}} +{{#if route_message}} + - route_message +{{/if}} {{#each tags as |tag i|}} - {{tag}} {{/each}} diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml index f0fe947ec291..1a29b010f30f 100644 --- a/packages/azure_logs/manifest.yml +++ b/packages/azure_logs/manifest.yml @@ -101,6 +101,14 @@ policy_templates: required: false show_user: true default: false + - name: route_message + type: bool + title: Route message + description: Route the message to a different data stream. 
+ multi: false + required: false + show_user: true + default: false - name: resource_manager_endpoint type: text title: Resource Manager Endpoint From 86962f7fdc2fb318abcd169d21dabf069cc3035d Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Wed, 30 Oct 2024 12:50:29 +0100 Subject: [PATCH 11/19] Remove route_message option It seems we cannot build OOTB routing into an input package --- packages/azure_logs/agent/input/input.yml.hbs | 3 --- packages/azure_logs/manifest.yml | 8 -------- 2 files changed, 11 deletions(-) diff --git a/packages/azure_logs/agent/input/input.yml.hbs b/packages/azure_logs/agent/input/input.yml.hbs index e2a5b070307e..3f948b8ebe4c 100644 --- a/packages/azure_logs/agent/input/input.yml.hbs +++ b/packages/azure_logs/agent/input/input.yml.hbs @@ -35,9 +35,6 @@ tags: {{#if parse_message}} - parse_message {{/if}} -{{#if route_message}} - - route_message -{{/if}} {{#each tags as |tag i|}} - {{tag}} {{/each}} diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml index 1a29b010f30f..f0fe947ec291 100644 --- a/packages/azure_logs/manifest.yml +++ b/packages/azure_logs/manifest.yml @@ -101,14 +101,6 @@ policy_templates: required: false show_user: true default: false - - name: route_message - type: bool - title: Route message - description: Route the message to a different data stream. - multi: false - required: false - show_user: true - default: false - name: resource_manager_endpoint type: text title: Resource Manager Endpoint From 2d6035c2c18c8209d5ef1a0e84cbbd740dc7007a Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Wed, 30 Oct 2024 13:00:56 +0100 Subject: [PATCH 12/19] Remove parse_message option I don't think we can (or want) offer this option. 
--- packages/azure_logs/agent/input/input.yml.hbs | 3 --- packages/azure_logs/manifest.yml | 8 -------- 2 files changed, 11 deletions(-) diff --git a/packages/azure_logs/agent/input/input.yml.hbs b/packages/azure_logs/agent/input/input.yml.hbs index 3f948b8ebe4c..57d44412dfb7 100644 --- a/packages/azure_logs/agent/input/input.yml.hbs +++ b/packages/azure_logs/agent/input/input.yml.hbs @@ -32,9 +32,6 @@ tags: {{#if preserve_original_event}} - preserve_original_event {{/if}} -{{#if parse_message}} - - parse_message -{{/if}} {{#each tags as |tag i|}} - {{tag}} {{/each}} diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml index f0fe947ec291..087111b9486a 100644 --- a/packages/azure_logs/manifest.yml +++ b/packages/azure_logs/manifest.yml @@ -93,14 +93,6 @@ policy_templates: replaces the default pipeline for this integration. required: false show_user: true - - name: parse_message - type: bool - title: Parse azure message - description: Apply minimal json parsing of the message, extracting resource details for fields as `resourceId`, `time` if found. - multi: false - required: false - show_user: true - default: false - name: resource_manager_endpoint type: text title: Resource Manager Endpoint From 5fd7295586988d286d3e148d0390eca55e792eae Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Wed, 30 Oct 2024 13:01:47 +0100 Subject: [PATCH 13/19] Make clear we're fetching logs from event hub --- packages/azure_logs/manifest.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml index 087111b9486a..b8e668fa8b33 100644 --- a/packages/azure_logs/manifest.yml +++ b/packages/azure_logs/manifest.yml @@ -20,8 +20,8 @@ icons: policy_templates: - name: azure-logs type: logs - title: Azure Logs - description: Collect Azure logs from Event Hub + title: Collect Azure logs from Event Hub + description: Collect Azure logs from Event Hub using the azure-eventhub input. 
input: azure-eventhub template_path: input.yml.hbs vars: From 8a3a0104e3d8e8a775c42dba68d2af4989e20361 Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Wed, 30 Oct 2024 17:07:45 +0100 Subject: [PATCH 14/19] Update change log and remove build info --- packages/azure_logs/changelog.yml | 6 +++--- packages/azure_logs/manifest.yml | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/packages/azure_logs/changelog.yml b/packages/azure_logs/changelog.yml index bde20454c815..11f1f3445d02 100644 --- a/packages/azure_logs/changelog.yml +++ b/packages/azure_logs/changelog.yml @@ -1,6 +1,6 @@ # newer versions go on top -- version: "0.1.0+build0002" +- version: "0.1.0" changes: - - description: Initial draft of the package + - description: Add Custom Azure Logs Input to collect log events from Azure Event Hubs type: enhancement - link: https://github.com/elastic/integrations/pull/1 # FIXME Replace with the real PR link + link: https://github.com/elastic/integrations/pull/11552 diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml index b8e668fa8b33..0aa615dab899 100644 --- a/packages/azure_logs/manifest.yml +++ b/packages/azure_logs/manifest.yml @@ -1,7 +1,7 @@ format_version: 3.3.0 name: azure_logs title: "Custom Azure Logs Input" -version: 0.1.0+build0002 +version: 0.1.0 source: license: Elastic-2.0 description: "Collect log events from Azure Event Hubs with Elastic Agent" From c07ad7ed69829871874c637a2aa6d4bcab32762a Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Wed, 30 Oct 2024 23:40:20 +0100 Subject: [PATCH 15/19] Add observability category --- packages/azure_logs/manifest.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml index 0aa615dab899..a9fb2a4aa90e 100644 --- a/packages/azure_logs/manifest.yml +++ b/packages/azure_logs/manifest.yml @@ -9,6 +9,7 @@ type: input categories: - azure - custom + - observability conditions: kibana: version: "^8.13.0" 
From 53288b9574f89e3d310bd8e760b665e571cf611f Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Wed, 30 Oct 2024 23:45:17 +0100 Subject: [PATCH 16/19] Update packages/azure_logs/docs/README.md Co-authored-by: Kavindu Dodanduwa --- packages/azure_logs/docs/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/azure_logs/docs/README.md b/packages/azure_logs/docs/README.md index adee65dec748..3017bbee4b7f 100644 --- a/packages/azure_logs/docs/README.md +++ b/packages/azure_logs/docs/README.md @@ -9,7 +9,7 @@ Use the integration to collect logs from: ## Data streams -The Custom Azure Logs Input integration collects one types of data streams: logs. +The Custom Azure Logs Input integration collects one type of data stream: logs. The integration does not comes with a pre-defined data stream. You can select your dataset and namespace of choice when configuring the integration. From 003ef00902a6024f06444a436c5267a5e1fe88b1 Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Wed, 30 Oct 2024 23:45:33 +0100 Subject: [PATCH 17/19] Update packages/azure_logs/docs/README.md Co-authored-by: Kavindu Dodanduwa --- packages/azure_logs/docs/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/azure_logs/docs/README.md b/packages/azure_logs/docs/README.md index 3017bbee4b7f..3fc5af6d33b4 100644 --- a/packages/azure_logs/docs/README.md +++ b/packages/azure_logs/docs/README.md @@ -11,7 +11,7 @@ Use the integration to collect logs from: The Custom Azure Logs Input integration collects one type of data stream: logs. -The integration does not comes with a pre-defined data stream. You can select your dataset and namespace of choice when configuring the integration. +The integration does not use a pre-defined Elastic data stream. You can select your dataset and namespace of choice when configuring the integration. 
For example, if you select `azure.custom` as your dataset, and `default` as your namespace, the integration will send the data to the `logs-azure.custom-default` data stream. From 1f2a92eebc59436ba63433a3186806f2ab67c78f Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Wed, 30 Oct 2024 23:49:12 +0100 Subject: [PATCH 18/19] Update docs source --- packages/azure_logs/_dev/build/docs/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/packages/azure_logs/_dev/build/docs/README.md b/packages/azure_logs/_dev/build/docs/README.md index adee65dec748..3fc5af6d33b4 100644 --- a/packages/azure_logs/_dev/build/docs/README.md +++ b/packages/azure_logs/_dev/build/docs/README.md @@ -9,9 +9,9 @@ Use the integration to collect logs from: ## Data streams -The Custom Azure Logs Input integration collects one types of data streams: logs. +The Custom Azure Logs Input integration collects one type of data stream: logs. -The integration does not comes with a pre-defined data stream. You can select your dataset and namespace of choice when configuring the integration. +The integration does not use a pre-defined Elastic data stream. You can select your dataset and namespace of choice when configuring the integration. For example, if you select `azure.custom` as your dataset, and `default` as your namespace, the integration will send the data to the `logs-azure.custom-default` data stream. From de5fd1a213e70f67e1372fb6039d7602482577ab Mon Sep 17 00:00:00 2001 From: Maurizio Branca Date: Thu, 31 Oct 2024 09:01:44 +0100 Subject: [PATCH 19/19] Remove "Input" from the package name - it's consistent with the equivalent AWS integration - end users a probably not aware of the input vs. 
integration --- packages/azure_logs/_dev/build/docs/README.md | 14 +++++++------- packages/azure_logs/changelog.yml | 2 +- packages/azure_logs/docs/README.md | 14 +++++++------- packages/azure_logs/manifest.yml | 2 +- 4 files changed, 16 insertions(+), 16 deletions(-) diff --git a/packages/azure_logs/_dev/build/docs/README.md b/packages/azure_logs/_dev/build/docs/README.md index 3fc5af6d33b4..d9c9de14de77 100644 --- a/packages/azure_logs/_dev/build/docs/README.md +++ b/packages/azure_logs/_dev/build/docs/README.md @@ -1,6 +1,6 @@ -# Custom Azure Logs Input +# Custom Azure Logs -The Custom Azure Logs Input integration collects logs from Azure Event Hub. +The Custom Azure Logs integration collects logs from Azure Event Hub. Use the integration to collect logs from: @@ -9,7 +9,7 @@ Use the integration to collect logs from: ## Data streams -The Custom Azure Logs Input integration collects one type of data stream: logs. +The Custom Azure Logs integration collects one type of data stream: logs. The integration does not use a pre-defined Elastic data stream. You can select your dataset and namespace of choice when configuring the integration. @@ -22,7 +22,7 @@ Custom Logs integrations give you all the flexibility you need to configure the You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware. -Before using the Custom Azure Logs Input you will need: +Before using the Custom Azure Logs you will need: * One **event hub** to store in-flight logs exported by Azure services (or other sources) and make them available to Elastic Agent. * A **storage account** to store information about logs consumed by the Elastic Agent. 
@@ -31,7 +31,7 @@ Before using the Custom Azure Logs Input you will need: [Azure Event Hubs](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-about) is a data streaming platform and event ingestion service. It can receive and temporary store millions of events. -Elastic Agent with the Custom Azure Logs Input integration will consume logs from the Event Hubs service. +Elastic Agent with the Custom Azure Logs integration will consume logs from the Event Hubs service. ```text ┌────────────────┐ ┌───────────┐ @@ -46,7 +46,7 @@ To learn more about Event Hubs, refer to [Features and terminology in Azure Even The [Storage account](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview) is a versatile Azure service that allows you to store data in various storage types, including blobs, file shares, queues, tables, and disks. -The Custom Azure Logs Input integration requires a Storage account container to work. +The Custom Azure Logs integration requires a Storage account container to work. The integration uses the Storage Account container for checkpointing; it stores data about the Consumer Group (state, position, or offset) and shares it among the Elastic Agents. Sharing such information allows multiple Elastic Agents assigned to the same agent policy to work together; this enables horizontal scaling of the logs processing when required. @@ -63,7 +63,7 @@ The integration uses the Storage Account container for checkpointing; it stores └────────────────┘ ``` -The Elastic Agent automatically creates one container for the Custom Azure Logs Input integration. The Agent will then create one blob for each partition on the event hub. +The Elastic Agent automatically creates one container for the Custom Azure Logs integration. The Agent will then create one blob for each partition on the event hub. 
For example, if the integration is configured to fetch data from an event hub with four partitions, the Agent will create the following: diff --git a/packages/azure_logs/changelog.yml b/packages/azure_logs/changelog.yml index 11f1f3445d02..42e95ebc7a89 100644 --- a/packages/azure_logs/changelog.yml +++ b/packages/azure_logs/changelog.yml @@ -1,6 +1,6 @@ # newer versions go on top - version: "0.1.0" changes: - - description: Add Custom Azure Logs Input to collect log events from Azure Event Hubs + - description: Add Custom Azure Logs to collect log events from Azure Event Hubs type: enhancement link: https://github.com/elastic/integrations/pull/11552 diff --git a/packages/azure_logs/docs/README.md b/packages/azure_logs/docs/README.md index 3fc5af6d33b4..d9c9de14de77 100644 --- a/packages/azure_logs/docs/README.md +++ b/packages/azure_logs/docs/README.md @@ -1,6 +1,6 @@ -# Custom Azure Logs Input +# Custom Azure Logs -The Custom Azure Logs Input integration collects logs from Azure Event Hub. +The Custom Azure Logs integration collects logs from Azure Event Hub. Use the integration to collect logs from: @@ -9,7 +9,7 @@ Use the integration to collect logs from: ## Data streams -The Custom Azure Logs Input integration collects one type of data stream: logs. +The Custom Azure Logs integration collects one type of data stream: logs. The integration does not use a pre-defined Elastic data stream. You can select your dataset and namespace of choice when configuring the integration. @@ -22,7 +22,7 @@ Custom Logs integrations give you all the flexibility you need to configure the You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware. 
-Before using the Custom Azure Logs Input you will need: +Before using the Custom Azure Logs you will need: * One **event hub** to store in-flight logs exported by Azure services (or other sources) and make them available to Elastic Agent. * A **storage account** to store information about logs consumed by the Elastic Agent. @@ -31,7 +31,7 @@ Before using the Custom Azure Logs Input you will need: [Azure Event Hubs](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-about) is a data streaming platform and event ingestion service. It can receive and temporary store millions of events. -Elastic Agent with the Custom Azure Logs Input integration will consume logs from the Event Hubs service. +Elastic Agent with the Custom Azure Logs integration will consume logs from the Event Hubs service. ```text ┌────────────────┐ ┌───────────┐ @@ -46,7 +46,7 @@ To learn more about Event Hubs, refer to [Features and terminology in Azure Even The [Storage account](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview) is a versatile Azure service that allows you to store data in various storage types, including blobs, file shares, queues, tables, and disks. -The Custom Azure Logs Input integration requires a Storage account container to work. +The Custom Azure Logs integration requires a Storage account container to work. The integration uses the Storage Account container for checkpointing; it stores data about the Consumer Group (state, position, or offset) and shares it among the Elastic Agents. Sharing such information allows multiple Elastic Agents assigned to the same agent policy to work together; this enables horizontal scaling of the logs processing when required. @@ -63,7 +63,7 @@ The integration uses the Storage Account container for checkpointing; it stores └────────────────┘ ``` -The Elastic Agent automatically creates one container for the Custom Azure Logs Input integration. 
The Agent will then create one blob for each partition on the event hub. +The Elastic Agent automatically creates one container for the Custom Azure Logs integration. The Agent will then create one blob for each partition on the event hub. For example, if the integration is configured to fetch data from an event hub with four partitions, the Agent will create the following: diff --git a/packages/azure_logs/manifest.yml b/packages/azure_logs/manifest.yml index a9fb2a4aa90e..dc1ffbcf8fd0 100644 --- a/packages/azure_logs/manifest.yml +++ b/packages/azure_logs/manifest.yml @@ -1,6 +1,6 @@ format_version: 3.3.0 name: azure_logs -title: "Custom Azure Logs Input" +title: "Custom Azure Logs" version: 0.1.0 source: license: Elastic-2.0