id | doi | file_id | url | authors | title | score | original_file_name | year | chunk_name | publisher | content | journal
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | null | 6771a1ce-c951-412e-868d-bf833c033b34 | https://www.radioeng.cz/fulltexts/2009/09_02_201_209.pdf | null | Microsoft Word - str_201_209.doc | -9.070313 | 6771a1ce-c951-412e-868d-bf833c033b34.md | null | chunk_6771a1ce-c951-412e-868d-bf833c033b34_5.md | radioeng_cz | # Current and Future Research Trends in Substrate Integrated Waveguide Technology
## 2 Historical Development
### Active SIW Components
SIW technology was also used to implement several active components, thus exploiting the advantage of an easy integration of the active elements with the waveguide components. In particular, a feedback oscillator was proposed in [27]: it operates at 12 GHz and is based on an SIW cavity that acts as a frequency selector as well as a feedback-coupling device. Another topology was adopted in the Ka-Band oscillator proposed in [28], where a Gunn diode is mounted in series with an SIW resonant cavity. An X-band single-balanced SIW mixer was presented in [29]: thanks to the use of an SIW hybrid coupler with excellent performance over a very broad band, the mixer exhibits an operation bandwidth from 8.5 to 12 GHz. A compact X-band single-transistor amplifier with SIW-based input and output matching networks was proposed
in [30]. Moreover, a four-device power amplifier operating at 35 GHz was presented in [31]: it exhibits good power-combining efficiency along with good heat sinking.

Figure 2: Number of publications on the SIW in IEEE journals (source: ieeexplore.ieee.org) | null |
2 | 10.3390/rs13153046 | 57877ff4-1c5b-418c-80a0-cba887203334 | https://www.mdpi.com/2072-4292/13/15/3046/pdf | Xingxing Li; Hongmin Zhang; Keke Zhang; Yongqiang Yuan; Wei Zhang; Yujie Qin | Earth Rotation Parameters Estimation Using GPS and SLR Measurements to Multiple LEO Satellites | -8.8125 | 57877ff4-1c5b-418c-80a0-cba887203334.md | 2,021 | chunk_57877ff4-1c5b-418c-80a0-cba887203334_8.md | mdpi | # Earth Rotation Parameters Estimation Using GPS and SLR Measurements to Multiple LEO Satellites
## 3 ERP Estimation Based on the Single LEO Satellite
In this section, the quality of the calculated LEO orbits is first assessed by comparison with external products and by SLR validation. Then, the influence of the LEO orbit improvement on the ERP estimation is assessed. After that, we compare and analyze the ERP solutions based on the different LEO observations and highlight the differences between these single-LEO solutions. | Remote Sensing |
3 | 10.1049/iet-com.2011.0365 | 00f5af16-7fb9-444c-a899-448b17a33bb2 | https://arxiv.org/pdf/1303.6859 | R.G. Clegg; S. Isam; I. Kanaras; I. Darwazeh | A practical system for improved efficiency in frequency division multiplexed wireless networks | -10.382813 | 00f5af16-7fb9-444c-a899-448b17a33bb2.pdf | 2,012 | arxiv_4_00f5af16-7fb9-444c-a899-448b17a33bb2_22 | arxiv | # A practical system for improved efficiency in frequency division multiplexed wireless networks
## References
* [20] [PERSON], [PERSON], [PERSON], and [PERSON], "A low complexity suboptimal MIMO receiver: The combined ZF-MLD algorithm," _14th IEEE Proceedings on Personal, Indoor and Mobile Radio Communications, 2003. PIMRC 2003_, vol. 3, pp. 2271-2275, 7-10 Sept. 2003.
* [21] [PERSON], [PERSON], [PERSON], and [PERSON], "A combined MMSE-ML detection for a spectrally efficient non orthogonal FDM signal," _5th International Conference on Broadband Communications, Networks and Systems, 2008. BROADNETS 2008_, pp. 421-425, Sept. 2008.
* [22] [PERSON] and [PERSON], "MMSE OFDM and prefixed single carrier systems: BER analysis," in _International Conference on Acoustics, Speech, and Signal Processing, 2003, ICASSP '03, IEEE_, Hong Kong, Apr. 2003, pp. 229-232.
* [23] [PERSON] and [PERSON], "A universal lattice code decoder for fading channels," _IEEE Transactions on Information Theory_, vol. 45, no. 5, pp. 1639-1642, Jul. 1999.
* [24] [PERSON], [PERSON], and [PERSON], "Semidefinite relaxation based multiuser detection for M-ary PSK multiuser systems," _IEEE Transactions on Signal Processing_, vol. 52, no. 10, pp. 2862-2872, Oct. 2004.
* [25] [PERSON], [PERSON], [PERSON], and [PERSON], "An Investigation of Semidefinite Programming Detection for a non orthogonal FDM system," _20th Personal, Indoor and Mobile Radio Communications Symposium 2009, IEEE PIMRC'09, Tokyo, Japan_, Sept. 2009.
* [26] ----, "A New Quasi-Optimal Detection Algorithm for a Non Orthogonal Spectrally Efficient FDM," in _9th International Symposium on Communications and Information Technologies 2009, IEEE ISCIT 2009, Incheon, Korea_, Sept. 2009.
* [27] [PERSON], [PERSON], [PERSON], and [PERSON], "A Fast Constrained Sphere Decoder for Ill Conditioned Communication Systems," _IEEE Communications Letters_, vol. 14, no. 11, pp. 999-1001, 2010.
* [28] [PERSON] and [PERSON], "Design and Performance Assessment of Fixed Complexity Spectrally Efficient FDM Receivers," in _IEEE 73rd Vehicular Technology Conference (IEEE VTC'11)_, 2011.
* [29] [PERSON], [PERSON], and [PERSON], "A Truncated SVD Approach for Fixed Complexity Spectrally Efficient FDM Receivers," in _IEEE Wireless Communications & Networking Conference (IEEE WCNC'11)_, 2011.
* [30] [PERSON] and [PERSON], "On the sphere-decoding algorithm I. Expected complexity," _IEEE Transactions on Signal Processing_, vol. 53, no. 8, pp. 2806-2818, Aug. 2005.
* [31] [PERSON] and [PERSON], "On the complexity of sphere decoding in digital communications," _IEEE Transactions on Signal Processing_, vol. 53, no. 4, pp. 1474-1484, Apr. 2005.
* [32] [PERSON], [PERSON], [PERSON], and [PERSON], "Joint Channel Equalization and Detection of Spectrally Efficient FDM Signals," in _21st Personal, Indoor and Mobile Radio Communications Symposium 2010, IEEE PIMRC'10_, Sep. 2010.
* [33] [PERSON] and [PERSON], "On the minimum distance problem for faster-than-Nyquist signaling," _IEEE Transactions on Information Theory_, vol. 34, no. 6, pp. 1420-1427, Nov. 1988.
* [34] [PERSON] and [PERSON], "The two dimensional Mazo limit," in _Proc. Int. Symposium on Information Theory_, vol. 1, 2005, pp. 970-974.
* [35] ----, "Multistream Faster than Nyquist Signaling," _IEEE Transactions on Communications_, vol. 57, no. 5, pp. 1329-1340, May 2009.
* [36] [PERSON] and [PERSON], "Data Transmission by Frequency-Division Multiplexing Using the Discrete Fourier Transform," _IEEE Transactions on Communications_, vol. 19, no. 5, pp. 628-634, Oct. 1971.
* [37] [PERSON] and [PERSON], "Flexible Hardware Architecture of SEFDM Transmitters with Real-Time Non-Orthogonal Adjustment," in _IEEE 18th International Conference on Telecommunications (IEEE ICT'11)_, 2011.
* [38] [PERSON], [PERSON], [PERSON], and [PERSON], "VLSI Architecture for a Reconfigurable Spectrally Efficient FDM Baseband Transmitter," in _IEEE International Symposium on Circuits and Systems (IEEE ISCAS'11)_, 2011. | IET Communications |
4 | 10.3390/app15137256 | cc8ddf22-3c05-4cd9-ba45-763dfb94721e | https://www.mdpi.com/2076-3417/15/13/7256/pdf | Yuqiang Zhang; Xuezhe Yao; Wenping Zhang; Zhaopeng Zhu | A Novel Adaptive Transient Model of Gas Invasion Risk Management While Drilling | -10.875 | cc8ddf22-3c05-4cd9-ba45-763dfb94721e.md | 2,025 | chunk_cc8ddf22-3c05-4cd9-ba45-763dfb94721e_21.md | mdpi | # A Novel Adaptive Transient Model of Gas Invasion Risk Management While Drilling
## 4 Comparison and Discussion
### Throttle Pressure Comparison
Figure 11 shows the variation in throttle pressure with time for the three BHP control models during gas invasion handling. At 960 s, a gas invasion warning is issued, and all three controllers start regulating the wellbore pressure simultaneously. From the gas invasion warning to the end of the gas invasion treatment, the PID control model took a total of 400 s and the fuzzy PID control model took a total of 320 s, while the fuzzy neural network PID control model took only 240 s to handle the gas invasion, which improved the
Figure 10: Control parameter vs. times for PID, fuzzy PID, and fuzzy neural network PID.
| Indicators | PI | PID | Fuzzy PID | Fuzzy Neural Network PID |
|---|---|---|---|---|
| Regulation time (s) | 87 | 53 | 42 | 32 |
| RMSE (MPa) | 0.31 | 0.24 | 0.22 | 0.20 |
| IAE (MPa·s) | 3250.39 | 1870.05 | 1674.84 | 1487.58 |

Table 8: Comparison of the four controllers.
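The efficiency gains quoted in the surrounding text follow directly from the handling times (PID: 400 s, fuzzy PID: 320 s, fuzzy neural network PID: 240 s). A minimal arithmetic check, with a helper that is illustrative and not part of the authors' model code:

```python
# Quick arithmetic check on the regulation-time figures in the text
# (PID: 400 s, fuzzy PID: 320 s, fuzzy neural network PID: 240 s).
# This helper is illustrative, not part of the authors' model code.
def improvement(baseline_s, improved_s):
    """Relative reduction in gas-invasion handling time, in percent."""
    return (baseline_s - improved_s) / baseline_s * 100.0

gain_vs_pid = improvement(400, 240)        # 40% faster than plain PID
gain_vs_fuzzy_pid = improvement(320, 240)  # 25% faster than fuzzy PID
```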
Figure 9: Effects of various intelligent controllers on BHP.
regulation efficiency by 40% over the PID control model and 25% over the fuzzy PID control model. Therefore, the fuzzy neural network PID can regulate the throttle valve opening and pressure more efficiently, thus increasing the BHP and reducing the gas invasion rate in a short time until the well is successfully pressurized. | Applied Sciences |
5 | 10.34133/2021/9810375 | d7eefe11-bd49-4610-a732-ead10b23aebe | https://spj.science.org/doi/pdf/10.34133/2021/9810375?download=true | Qingtao Wang; Dongping Jin; Xiaoting Rui | Dynamic Simulation of Space Debris Cloud Capture Using the Tethered Net | -9.710938 | d7eefe11-bd49-4610-a732-ead10b23aebe.md | 2,021 | chunk_d7eefe11-bd49-4610-a732-ead10b23aebe_5.md | spj | # Dynamic Simulation of Space Debris Cloud Capture Using the Tethered Net
## 2 System Modeling
As threads woven into the net are usually very thin, it is very suitable to establish the dynamic model of the net, undergoing large deformations and large overall motions in space, based on the ANCF gradient deficient beam element [19]. Compared to the mass-spring model, the ANCF model exhibits a constant mass matrix of the dynamic system, geometric nonlinearity, zero centrifugal and Coriolis forces, and, consequently, a good capability of describing the flexibility of the net [23].

Figure 1: The numerical and experimental results of debris cloud morphology [41].

[PERSON] et al. applied the ANCF to model the net by introducing a local coordinate in \([0,l]\) to obtain the position of an arbitrary point within only one element [30]. This is not convenient for the multizone contact detection strategy used in this paper to detect the contact between the net and the debris cloud [19], as in this strategy the detecting point may skip from one element to its adjacent one. Moreover, in their work [30], only the elastic forces due to the longitudinal deformation of threads are taken into account; the bending deformation, which is very important for contact detection, is not considered. So in this paper, the ANCF gradient deficient beam element suitable for contact detection is introduced, both the longitudinal and bending deformations are taken into consideration, and the elastic forces and Jacobians due to these deformations are derived in detail. For the convenience of further discussion on contact detection, as shown in Figure 2, a new body-fixed arc coordinate \(\zeta\) is introduced to describe the global position of an arbitrary point on a thread meshed by the ANCF gradient deficient beam. Thus, the global position vector \(\mathbf{r}\) of an arbitrary point \(P\) on the centerline of the entire thread yields
\[\mathbf{r}(\zeta)=\mathbf{S}\Big{(}\bar{\zeta}\Big{)}\mathbf{e}_{\hat{\zeta}}, \tag{1}\]
where \(\zeta\in[0,L]\) is the arc coordinate of the thread and \(L\) is its length in the undeformed configuration. \(\hat{\zeta}\in[1,N_{\epsilon}]\) and \(\bar{\zeta}\in[0,1]\) are the integer part and the decimal part of the dimensionless value \(\zeta/l\), denoting the element to which the point \(P\) belongs and the position on that element where the point is located, respectively. \(N_{\epsilon}\) and \(l=L/N_{\epsilon}\) are the total number of elements used to discretize the thread and the element length in the undeformed configuration, respectively. \(\mathbf{e}_{\hat{\zeta}}\) is the global nodal coordinate vector of the element \(\hat{\zeta}\), composed of the positions and gradient coordinates, as
\[\mathbf{e}_{\hat{\zeta}}=\Big{[}\mathbf{r}_{\hat{\zeta}}^{\mathrm{T}}\quad\mathbf{r}_{\hat{\zeta}}^{\prime\,\mathrm{T}}\quad\mathbf{r}_{\hat{\zeta}+1}^{\mathrm{T}}\quad\mathbf{r}_{\hat{\zeta}+1}^{\prime\,\mathrm{T}}\Big{]}^{\mathrm{T}}, \tag{2}\]
where \(\mathbf{r}_{\hat{\zeta}}\) and \(\mathbf{r}_{\hat{\zeta}+1}\) are the global position vectors of the two nodes (red dots in Figure 2) of the element; \(\mathbf{r}_{\hat{\zeta}}^{\prime}=l\,\partial\mathbf{r}_{\hat{\zeta}}/\partial\bar{\zeta}\) and \(\mathbf{r}_{\hat{\zeta}+1}^{\prime}=l\,\partial\mathbf{r}_{\hat{\zeta}+1}/\partial\bar{\zeta}\) are the gradient coordinates at the two ends. \(\mathbf{S}(\bar{\zeta})\in\mathrm{R}^{3\times 12}\) is the shape function matrix of the element, expressed as
\[\mathbf{S}=[\,S_{1}\mathbf{I}_{3}\quad S_{2}\mathbf{I}_{3}\quad S_{3}\mathbf{ I}_{3}\quad S_{4}\mathbf{I}_{3}\,], \tag{3}\]
where \(S_{1}=1-3\bar{\zeta}^{2}+2\bar{\zeta}^{3}\), \(S_{2}=l(\bar{\zeta}-2\bar{\zeta}^{2}+\bar{\zeta}^{3})\), \(S_{3}=3\bar{\zeta}^{2}-2\bar{\zeta}^{3}\), and \(S_{4}=l(\bar{\zeta}^{3}-\bar{\zeta}^{2})\); \(\mathbf{I}_{3}\) is the identity matrix of order 3. The constant element mass matrix \(\mathbf{M}^{\epsilon}\) of the thread can be written as [30]
\[\mathbf{M}^{\epsilon}=\rho Al\int_{0}^{1}\mathbf{S}^{\mathrm{T}}\mathbf{S}\,d\bar{\zeta}, \tag{4}\]
where \(\rho\) and \(A\) are the density of the thread material and the area of the thread cross-section, respectively. | Space: Science & Technology |
6 | 10.3390/app15147740 | c002c8a8-3b66-43f7-a0cd-b743dca36f0a | https://www.mdpi.com/2076-3417/15/14/7740/pdf | Safia Meteb Al-Nofaie; Sanaa Sharaf; Rania Molla | Design Trends and Comparative Analysis of Lightweight Block Ciphers for IoTs | -10.84375 | c002c8a8-3b66-43f7-a0cd-b743dca36f0a.md | 2,025 | chunk_c002c8a8-3b66-43f7-a0cd-b743dca36f0a_25.md | mdpi | # Design Trends and Comparative Analysis of Lightweight Block Ciphers for IoTs
## 4 Related Work
### Generalized Feistel Network (GFN)
Most notably, its GE count (2180), although small, is higher than some other modern TBCs, such as SKINNY-64-128 (1696 GE) and CRAFT-128 (1193 GE), which were designed from scratch with minimal tweak cost in mind. Additionally, T-TWINE inherits the limitations of TWINE's 64-bit block size, which may not be sufficient for applications requiring larger security margins or higher throughput. The cipher also does not claim security in the chosen tweak and related-key setting, a scenario increasingly relevant in certain side-channel and protocol-driven environments. Therefore, while T-TWINE is excellent for systems needing minimal modification to existing Feistel-based designs, it may not be the best candidate for use cases demanding ultra-low GE or broader adversarial security guarantees. | Applied Sciences |
7 | 10.3390/app9112228 | 351687aa-9fa8-4c64-86ac-d5510584b624 | https://www.mdpi.com/2076-3417/9/11/2228/pdf | Shiue-Der Lu; Hong-Wei Sian; Meng-Hui Wang; Rui-Min Liao | Application of Extension Neural Network with Discrete Wavelet Transform and Parseval’s Theorem for Power Quality Analysis | -10.679688 | 351687aa-9fa8-4c64-86ac-d5510584b624.md | 2,019 | chunk_351687aa-9fa8-4c64-86ac-d5510584b624_13.md | mdpi | # Application of Extension Neural Network with Discrete Wavelet Transform and Parseval’s Theorem for Power Quality Analysis
## 3 Results
### Feature Signal Capture
According to the proposed signal feature extraction method, a set of fundamental wave energy features \(E_{pure}\) is generated from normal power signals by the wavelet transform and Parseval's theorem, in order to highlight the features of power quality disturbances. The power quality disturbance signals are likewise processed by the DWT and Parseval's theorem to obtain the feature \(E_{distortion}\) of the instantaneous signal of a PQD; the fundamental wave energy is then deducted from it to obtain the energy difference \(\Delta E\), as expressed in Equation (19). The power quality disturbance signals in Figure 5a-e are processed with this signal characteristic energy, and the resulting wavelet energy eigenvalue curves are shown in Figure 6a-e. The feature curves show different energy distributions for different power quality disturbance signals, proving that the proposed method can effectively extract the power quality disturbance features.
\[\Delta E=\left[\begin{array}{c}E_{distortion,c_{0}}\\ E_{distortion,d_{1}}\\ \vdots\\ E_{distortion,d_{j}}\\ \vdots\\ E_{distortion,d_{j-1}}\\ \end{array}\right]-\left[\begin{array}{c}E_{pure,c_{0}}\\ E_{pure,d_{1}}\\ \vdots\\ E_{pure,d_{j}}\\ \vdots\\ E_{pure,d_{j-1}}\\ \end{array}\right]=\left[\begin{array}{c}\Delta E_{c_{0}}\\ \Delta E_{d_{1}}\\ \vdots\\ \Delta E_{d_{j}}\\ \vdots\\ \Delta E_{d_{j-1}}\\ \end{array}\right]. \tag{19}\]
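Equation (19) amounts to subtracting the per-band wavelet energies of the pure signal from those of the disturbed one. A minimal self-contained sketch using an orthonormal Haar DWT; the wavelet family, decomposition depth, and test signals are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

# Per-band wavelet energies via an orthonormal Haar DWT. By Parseval's
# theorem the band energies sum to the total signal energy, which is the
# property the energy-difference feature of Eq. (19) relies on.
def haar_dwt_energies(x, levels):
    """Return [E_d1, ..., E_dJ, E_cJ]: detail energy per level, then the
    approximation energy. len(x) must be divisible by 2**levels."""
    a = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        energies.append(float(np.sum(d**2)))
    energies.append(float(np.sum(a**2)))
    return np.array(energies)

t = np.arange(256) / 256
pure = np.sin(2 * np.pi * 8 * t)                     # fundamental only
disturbed = pure + 0.3 * np.sin(2 * np.pi * 64 * t)  # added harmonic disturbance
delta_E = haar_dwt_energies(disturbed, 4) - haar_dwt_energies(pure, 4)  # Eq. (19)
```

The disturbance shows up as a nonzero energy difference concentrated in the bands covering the added harmonic.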
Figure 5: Power system signal with different disturbances: (**a**) power interruption; (**b**) voltage sag; (**c**) voltage swell; (**d**) voltage flicker; (**e**) power harmonics.
| Applied Sciences |
8 | 10.3390/s21217060 | e8448e4a-3c7c-4a01-8ac3-e76ce21e855a | https://www.mdpi.com/1424-8220/21/21/7060/pdf | Jatinkumar Patel; Hosam El-Ocla | Energy Efficient Routing Protocol in Sensor Networks Using Genetic Algorithm | -9.695313 | e8448e4a-3c7c-4a01-8ac3-e76ce21e855a.md | 2,021 | chunk_e8448e4a-3c7c-4a01-8ac3-e76ce21e855a_6.md | mdpi | # Energy Efficient Routing Protocol in Sensor Networks Using Genetic Algorithm
## 2 Related Work
Even if it is based on a random search, GA is able to provide a high-quality solution [18]. GA uses the principles of selection and evolution to produce multiple solutions to a given problem [19; 20]. Challenges in wireless networks such as node mobility, fading, congestion, and collision have no negative impact on the GA, and this is one of its main characteristics. In [21], the authors proposed a GA-based routing algorithm for flying ad-hoc networks (FANETs). The proposed FF considers several parameters, including the maximum link bandwidth, the highest network link stability, and the largest residual power of the nodes. These parameters are more crucial in FANETs than in other types of ad-hoc networks. However, having several components in the fitness function negatively impacts the convergence speed of the algorithm in selecting the efficient route. Similarly, in [22], several fitness function components are proposed, which slows down the route selection process. In [23], the authors introduced a FF that is calculated using the shortest route length in addition to the parameters measured in the route discovery phase, including the delay for getting the shortest route and the number of available routes. This method consumes an excessive amount of processing time and energy. In [24], the authors proposed a FF that suits applications with an excessive amount of data transmission, for which data congestion is a key factor that should be considered. There are other methods, such as [25], where the proposed FF is complex and the route selection takes an extended time. In [26; 27; 28], GA-based methods were introduced that require lengthy processing and large memory in the source node, as every possible route is involved in the GA regardless of its fitness level. In [29], the authors proposed a routing protocol that considers the network as a directed graph, optimizing routes based on the quality of links between every two nodes using a genetic algorithm.
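To make the discussion of fitness-function components concrete, here is a hypothetical weighted route FF of the kind the cited works combine. The field names, weights, and the normalization of every quantity to [0, 1] are illustrative assumptions, not the FF of any cited paper:

```python
# Hypothetical GA fitness function (FF) for a candidate route. Each extra
# weighted component (energy, hops, bandwidth, ...) enlarges the landscape
# the GA must search, which is why FFs with many components converge slowly.
def route_fitness(route, w_energy=0.5, w_hops=0.3, w_bandwidth=0.2):
    """Higher is better. A route is a list of node dicts with normalized fields."""
    min_energy = min(node["residual_energy"] for node in route)    # bottleneck energy
    hop_score = 1.0 / len(route)                                   # shorter is better
    min_bandwidth = min(node["link_bandwidth"] for node in route)  # bottleneck link
    return w_energy * min_energy + w_hops * hop_score + w_bandwidth * min_bandwidth

route = [{"residual_energy": 0.9, "link_bandwidth": 0.8},
         {"residual_energy": 0.7, "link_bandwidth": 0.6}]
score = route_fitness(route)  # 0.5*0.7 + 0.3*0.5 + 0.2*0.6 = 0.62
```

Using the bottleneck (minimum) values rather than averages reflects that a route is only as strong as its weakest node or link.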
In this regard, the link status might be impacted by several factors, such as noise. Therefore, with a small change in a link status, the routing protocol has to go through the regeneration of routes based on the modified graph. Among the various maximum-lifetime routing protocols, one type is called the Efficient Power Aware Routing protocol (EPAR) [30], which is an extension of the DSR protocol. EPAR selects the path that has the largest packet capacity at the smallest consumed packet transmission energy. Due to the mobility of the nodes in sensor networks, EPAR is designed to find multi-path routes, so that the delay of discovering a new route is avoided in case the current route fails [17]. Alternatively, if a single intermediate node goes out of the coverage area, then the whole communication can be disturbed owing to the topological change, which would enlarge the data loss. This would result in an excessive amount of bandwidth usage, which in turn increases the number of packet retransmissions, representing low performance. Accordingly, EPAR using Breadth-First Search (BFS) was introduced to select the route with the minimum energy consumption. However, EPAR-BFS faces problems when the network size increases. In this paper, we propose a solution for selecting the energy-efficient route which enhances the mobile node's lifetime. | Sensors |
9 | 10.3390/app13042358 | 11e5ba15-f1d7-454c-bf92-2578068e9dbf | https://www.mdpi.com/2076-3417/13/4/2358/pdf | Katarzyna Regulska; Agnieszka Matera-Witkiewicz; Aleksandra Mikołajczyk; Beata J. Stanisz | The Degradation Product of Ramipril Is Potentially Carcinogenic, Genotoxic and Mutagenic | -10.296875 | 11e5ba15-f1d7-454c-bf92-2578068e9dbf.md | 2,023 | chunk_11e5ba15-f1d7-454c-bf92-2578068e9dbf_25.md | mdpi | # The Degradation Product of Ramipril Is Potentially Carcinogenic, Genotoxic and Mutagenic
## 4 Discussion
The second stage is deprotonation of the reacting amine, the addition of the neutral nitrogen to the carbonyl of the neighboring carboxylic acid to form a tetrahedral intermediate, the escape of a water molecule and the final new bond formation. Here, the rate-limiting step of this process is the _cis_/_trans_ transformation of RAM, associated with multiple bond rotations and a consequent high energy consumption, defined by a high E\({}_{\text{a}}\) (Table 1, \(\Delta\)E\({}_{\text{a}}\) = 174.12 kJ/mol). The subsequent water loss is thermodynamically favorable (as supported by the positive \(\Delta\)S, Table 1); hence, its rapid progression translates into a high value of the reaction rate constant. At RH 76%, no conformational changes are necessary for water molecules to access the ester bond in RAM; thus, the E\({}_{\text{a}}\) for RAM hydrolysis under humid conditions is relatively low. Our findings are significant from a manufacturing point of view. Employing dry formulation methods would lead to compromised RAM stability, secondary to dry-air degradation, which would be faster than degradation under humid conditions. Furthermore, the impurity profile would be affected, with DKP becoming a major degradant instead of RAM diacid. As a result, on heating and dry-processing, RAM would rapidly cyclize to DKP, with all downstream consequences on its clinical and toxicological performance. Therefore, the application of dry procedures for RAM in the industry should be avoided. In the next stage of this study, we decided to assess the impact of the RAM degradation mechanism on its safety. We focused our interest only on DKP, since the toxicological data for both RAM and RAM diacid are available in the registration dossiers of commercially available RAM dosage forms. 
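The kinetic weight of such an activation energy can be illustrated with the Arrhenius factor \(e^{-E_a/RT}\). Only the 174.12 kJ/mol value comes from the text (Table 1); the comparison barrier and temperature below are illustrative assumptions:

```python
import math

# Back-of-envelope Arrhenius comparison: the rate-constant factor
# exp(-Ea/(R*T)) for the high barrier quoted in the text (174.12 kJ/mol,
# Table 1) versus a hypothetical lower barrier at room temperature.
R = 8.314  # gas constant, J/(mol*K)

def arrhenius_factor(Ea_kJ_per_mol, T_kelvin):
    return math.exp(-Ea_kJ_per_mol * 1000.0 / (R * T_kelvin))

# A barrier ~74 kJ/mol lower speeds the reaction up by many orders of magnitude.
ratio = arrhenius_factor(100.0, 298.15) / arrhenius_factor(174.12, 298.15)
```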
Based on the literature background, which suggested a possible carcinogenic activity of ACE-Is, we performed a preliminary in silico QSAR simulation assessing various oncologic endpoints, i.e., the general carcinogenicity, mutagenicity and genotoxicity of DKP. Mutagenicity refers to the capacity of chemicals to cause changes in DNA sequences, leading to mutations, while genotoxicity causes damage to genomes, i.e., DNA or chromosomes. The genotoxic agents that cause structural chromosomal aberrations are clastogens, while those that cause numerical chromosomal aberrations are aneugens. The main practical difference between mutagenicity and genotoxicity is that mutagens have no threshold level and any exposure to them is hazardous. Thus, they are not allowed in final dosage forms. Genotoxic agents, in turn, act via threshold mechanisms and their low exposure is not always associated with cancer outcome. Thus, their safe concentration must be established and maintained in drug formulations. The results obtained from our QSAR simulations allowed us to classify DKP as a non-mutagen, as the calculated endpoint score (ES\({}_{\text{av}}\) = 0.2) fell within the non-mutagenicity criteria. The reliability of this prediction was high. Despite this, the adopted in silico model suggested other mechanisms of toxicity, i.e., the carcinogenic and genotoxic activity of DKP, but these predictions were not sufficiently reliable (ES\({}_{\text{av}}\) = 0.5 and 0.6, respectively). On this basis, we assumed that DKP is a potential carcinogen that acts via a mechanism unrelated to direct DNA damage (non-mutagen), probably via chromosome damage (genotoxic agent). However, due to the insufficient reliability of the QSAR simulation for the genotoxicity endpoint, follow-up experiments were necessary, either by in vitro micronucleus or by chromosome aberration assay. 
The in vitro mammalian cell micronucleus test for the genotoxicity assessment of DKP was selected as a follow-up to our QSAR simulations. The study was designed so as to screen for both aneugenic (aneuploidy-inducing) and clastogenic (chromosome-damaging) activity of the investigated compound. To that end, different treatment modes were applied (a short one for clastogens and an extended one for aneugens). | Applied Sciences |
10 | 10.1103/physrevresearch.4.033220 | 49b0f596-7b48-461a-9cf0-d191b2719284 | https://arxiv.org/pdf/2202.06798 | H. Laurell; D. Finkelstein-Shapiro; C. Dittel; C. Guo; R. Demjaha; M. Ammitzböll; R. Weissenbilder; L. Neoričić; S. Luo; M. Gisselbrecht; C. L. Arnold; A. Buchleitner; T. Pullerits; A. L'Huillier; D. Busto | Continuous-variable quantum state tomography of photoelectrons | -9.367188 | 49b0f596-7b48-461a-9cf0-d191b2719284.pdf | 2,022 | arxiv_1_49b0f596-7b48-461a-9cf0-d191b2719284_2 | arxiv | # Continuous variable quantum state tomography of photoelectrons
###### Abstract
We propose a continuous variable quantum state tomography protocol of electrons which result from the ionization of atoms or molecules by the absorption of extreme ultraviolet light pulses. Our protocol is benchmarked against a direct calculation of the quantum state of photoelectrons ejected from helium and argon in the vicinity of a Fano resonance. In the latter case, we furthermore distill ion-photoelectron entanglement due to spin-orbit splitting. This opens new routes towards the investigation of quantum coherence and entanglement properties on the ultrafast timescale.

Thanks to the discovery of high order harmonic generation [1; 2], attosecond light sources were developed, enabling the study of electron dynamics with high temporal resolution [3]. By energy-time uncertainty, such attosecond light pulses, with a central frequency in the extreme ultraviolet (XUV) range, have broad spectral widths. Hence, their interaction with matter usually results in a photoionization process where the ejected electron populates a broad distribution of continuum states. The resulting electronic state may be either pure or mixed.

The first experimental methods developed for the characterisation of attosecond pulses relied on the coherence of the photoionization process. The Reconstruction of Attosecond Beating By Interference of Two photon transitions (RABBIT) [4], as well as attosecond streaking [5], were initially invented to characterize the temporal properties of attosecond light pulses. The same techniques were then applied to determine time delays in the photoionization process [6; 7]. More recently RABBIT was used to measure the spectral amplitude and phase of photoelectrons in the vicinity of Fano resonances [8; 9; 10]. In general, these characterization methods are readily applicable to pure quantum states. For mixed states of the ejected electrons, which must be described by a density operator rather than by a state vector in Hilbert space, they are unsuitable. 
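The pure/mixed distinction the text draws can be made quantitative with the purity \(\mathrm{Tr}(\rho^2)\), which equals 1 only for pure states. A minimal two-level (qubit) sketch, with states chosen purely for illustration:

```python
import numpy as np

# Purity Tr(rho^2) separates the pure states that RABBIT-type methods can
# characterize from the mixed states that require a density operator and
# hence quantum state tomography.
def purity(rho):
    return float(np.real(np.trace(rho @ rho)))

pure_state = np.outer([1.0, 0.0], [1.0, 0.0])  # |0><0|, purity 1
mixed_state = 0.5 * np.eye(2)                  # maximally mixed qubit, purity 1/2
```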
Mixed states occur due to several causes, such as decoherence processes or incomplete measurements of entangled particles or degrees of freedom. Decoherence due to interactions with an environment can be neglected on attosecond and few femtosecond time scales. In contrast, strong coupling, e.g., between electronic and nuclear degrees of freedom in molecular systems [11; 12; 13; 14] has the potential to induce mixing when only the electron's (or only the ion's) degrees of freedom are interrogated.

For the characterization of mixed quantum states, the gold standard is _quantum state tomography_ (QST) [15]. It aims at reconstructing an unknown state from a series of projective measurements which yield the state density operator - its most general quantum description - and is widely used in quantum optics. QST has also been applied successfully in specific instances of multidimensional spectroscopy, where pairs of coherent pulses are used to extract the populations and coherences of the states [16; 17]. QST has only recently been applied in attosecond science. Trains of photoelectrons have been characterized by discrete variable quantum state tomography, using SQUIRRELS (Spectral Quantum Interference for the Regularized Reconstruction of free-Electron States) [18] in the context of electron microscopy, and using Mixed-FROG (Frequency Resolved Optical Gating) [19] for photoelectrons created by absorption of attosecond pulse trains. Both methods rely on a retrieval algorithm to reconstruct the photoelectron quantum state from the measured spectrogram. Here we propose a robust tomography protocol - Kvanttillstands tomogRafi av AttoseKund ElektroNvagpaket (KRAKEN, engl. "quantum state tomography of attosecond electron wavepackets") - that can reconstruct the photoelectron's quantum state without relying on a retrieval algorithm. | Physical Review Research |
11 | 10.3390/app14041429 | 7186311e-047f-48df-aa2f-547d143d4ebb | https://www.mdpi.com/2076-3417/14/4/1429/pdf | Antonio Díaz-Soriano; Antonio Ortiz-Mora; David Martínez-Muñoz; Pedro Rodríguez | On the Impact of Wavelength Dependency on Supercontinuum Generation in Photonic Crystal Fibers | -7.007813 | 7186311e-047f-48df-aa2f-547d143d4ebb.md | 2,024 | chunk_7186311e-047f-48df-aa2f-547d143d4ebb_3.md | mdpi | # On the Impact of Wavelength Dependency on Supercontinuum Generation in Photonic Crystal Fibers
## 1 Introduction
In applied optics, the introduction of photonic crystals [1] marked an important milestone in the technological ability to control and manipulate light. The functionality of a photonic crystal is based on its periodic structure at the micrometer or nanometer scale, so that electromagnetic waves in the visible range are affected by effects analogous to those experienced by an electron in a periodic potential. This creates a pattern through which photons cannot propagate in certain directions, with certain polarizations, or at certain wavelengths, due to the formation of what are known as photonic band gaps. Their application is quite widespread; it has even recently been shown that they can be used for polarization control in metamaterials [2].
The idea of bringing these microstructures to the field of fiber optic waveguides led to the introduction of photonic crystal fibers (PCFs) [3, 4], which opened a host of new research fields for the photonics community, such as their use as sensors [5], in high-power generation [6], or as fiber lasers [7], to cite some. Among these new applications, supercontinuum generation by means of a PCF emerged as an easy way to cause a significant broadening of the spectral bandwidth of an input pulse propagating through the fiber [8, 9, 10].
Since the fabrication process of a PCF is expensive, it is mandatory to simulate the characteristics and performance of the design beforehand to be sure it will offer the expected results [11].
The simulation process consists of two clearly differentiated parts: the characterization of the fiber parameters solving the Helmholtz equation and the study with the generalized nonlinear Schrodinger equation (GNLSE) of the propagation of the pulse under the effects of the previously determined parameters [12, 13]. In our work, the Helmholtz equation can be treated as an eigenvalue problem [14], and the GNLSE will be solved using spectral techniques.
From a PCF characterization model, a set of curves that relates dispersion parameters with the light wavelength inside the fiber has been obtained [15]. Once these parameters are known, the GNLSE shows the pulse evolution in the fiber both in the time and spectral domains.
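To make the idea of dispersion-versus-wavelength curves concrete, here is a small illustrative sketch (not this work's PCF model, whose parameters come from the Helmholtz solver): it evaluates the material GVD parameter \(\beta_2(\lambda)\) of bulk fused silica from the standard three-term Sellmeier fit, with the second derivative taken by central differences. All coefficients are published bulk-silica values, not fiber data from this paper.

```python
import math

# Three-term Sellmeier fit for bulk fused silica (Malitson 1965); wavelength in micrometers.
B = (0.6961663, 0.4079426, 0.8974794)
L2 = (0.0684043**2, 0.1162414**2, 9.896161**2)

def n(lam_um):
    """Refractive index n(lambda) from the Sellmeier equation."""
    l2 = lam_um**2
    return math.sqrt(1.0 + sum(b*l2/(l2 - l) for b, l in zip(B, L2)))

def beta2_ps2_per_km(lam_um, h=1e-3):
    """GVD parameter beta2 = lam^3/(2*pi*c^2) * d^2n/dlam^2, via central differences."""
    d2n = (n(lam_um + h) - 2.0*n(lam_um) + n(lam_um - h)) / h**2   # per um^2
    lam_m, d2n_m = lam_um*1e-6, d2n*1e12                            # convert to SI units
    c = 299792458.0
    return lam_m**3 * d2n_m / (2.0*math.pi*c**2) * 1e27             # s^2/m -> ps^2/km

print(round(beta2_ps2_per_km(1.55), 1))  # ≈ -28: anomalous material dispersion at 1550 nm
print(beta2_ps2_per_km(1.00) > 0)        # normal dispersion below the ZDW (~1.27 um)
```

The same finite-difference route applies once the PCF's effective index \(n_{\mathrm{eff}}(\lambda)\) replaces the bulk Sellmeier curve.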
There is nothing new in what has been exposed so far; both are well-known problems that were solved a long time ago. However, while the characterization of the PCF determines the relation between all its parameters and the corresponding wavelength, it is common practice to assume a constant working wavelength in the propagation, usually associated with the center of the pulse [16]. This approximation could be made in systems where the spectral broadening of the pulse during the simulation is negligible, but supercontinuum generation is precisely the opposite case, because the goal there is to broaden the spectrum as much as possible [17].
The aim of this work is to study the influence of the wavelength dependence of the nonlinear and dispersion parameters on the pulse propagation and its effect on the spectrum. Thus, we analyzed whether this impact justifies implementing the dependency in the usual methods that solve the GNLSE when supercontinuum generation occurs. As will be shown in the Conclusions, this is the case.
The structure of the paper is as follows. Section 2 will introduce the equations mentioned before, starting with the PCF parameters' characterization through the Helmholtz equation and their application to the proposed structure; secondly, the GNLSE is presented, taking into account the different effects that it includes and its associated terms, as well as the spectral method used to solve it. The supercontinuum generation will also be presented there. Section 3 collects all the results from the simulations and the comparison data between the constant and wavelength-dependent cases. The conclusions will be presented at the end of the whole study. | Applied Sciences |
12 | 10.3390/app11146467 | 381493cf-5107-4e9d-bef2-e0997a68f3eb | https://www.mdpi.com/2076-3417/11/14/6467/pdf | Uiraquitan Tadeu Gomes; Plácido Rogério Pinheiro; Rommel Dias Saraiva | Dye Schedule Optimization: A Case Study in a Textile Industry | -9.984375 | 381493cf-5107-4e9d-bef2-e0997a68f3eb.md | 2,021 | chunk_381493cf-5107-4e9d-bef2-e0997a68f3eb_4.md | mdpi | # Dye Schedule Optimization: A Case Study in a Textile Industry
## 2 Brazilian Textile Sector
### Manufacture
The textile chain starts transforming the raw material (natural, synthetic fibres and polymers) into yarns and filaments in the spinning mills. Then there can be two flows, first: warping, dyeing, flat weaving, finishing, and finally the clothing phase, and second: knitting, finishing, and finally the clothing and distribution phase. In Figure 1, we can see the macro flow of the chain. Thus, the result of each step constitutes the primary input of the next one. Each stage has its characteristics, with discontinuity between them, subdividing into several operations and elaborating intermediate products.
The dyeing sector stands out as a critical process in the operation and one with many opportunities for gain. Beyond the technical aspect, there have been concerns about minimizing the negative environmental impact of the dyeing industry. For this production, scheduling has been applied [10] in many process industries to reduce pollutant emissions. Denim manufacturing faces an eco-efficiency challenge concerning sustainability. Alternatives have been studied to develop new dyeing processes that are cleaner, more efficient, faster, cheaper, and easier to apply [8].
The dyeing process of denim type fabrics (used to make jeans) is characterized by dyeing the cotton yarn before weaving [11]. In this process, the most used dye is indigo, whose method remains the same as natural indigo [12].
In the continuous indigo dyeing process, there are three technologies, which are:
* Rope dye
* Slasher dye
* Loop dye | Applied Sciences |
5,000,000 | 10.1002/aelm.202400315 | 9167b62d-5a26-466e-880b-1e98fc7ca718 | https://advanced.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/aelm.202400315 | Zhengpeng Wang; Fei Tang; Fang‐Fang Ren; Hongwei Liang; Xiangyuan Cui; Shijie Xu; Shulin Gu; Rong Zhang; Youdou Zheng; Jiandong Ye | Unraveling Abnormal Thermal Quenching of Sub‐Gap Emission in <i>β</i>‐Ga<sub>2</sub>O<sub>3</sub> | -10.804688 | 9167b62d-5a26-466e-880b-1e98fc7ca718.md | 2,024 | chunk_9167b62d-5a26-466e-880b-1e98fc7ca718_4.md | wiley |
Due to the existence of STHs, \(\beta\)-Ga\({}_{2}\)O\({}_{3}\) does not exhibit near-band-edge luminescence, and the sub-bandgap emission spectra can be divided into three main emission bands: ultraviolet (UV),[23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35] blue (BL), and green (GR).[36, 37, 38, 39, 40, 41, 42, 43, 44, 45] UV is widely related to STHs or STEs,[30, 31, 32, 33, 34, 35] and BL is considered to be associated with donor-acceptor pair (DAP) recombination between V\({}_{\mathrm{O}}\) donors and V\({}_{\mathrm{Ga}}\) or V\({}_{\mathrm{Ga}}\)-V\({}_{\mathrm{O}}\) complex acceptors.[27, 30] Nevertheless, there are divergent opinions about the origin of GR: i) GR is related to DAP recombination involving dopants like Sn, Ge, and Be,[27] and dopants can cause the conversion of BL to GR; ii) GR is related to self-trapped or bound excitons on oxygen atoms;[25] iii) GR is associated with the _nsnp-ns\({}^{2}\)_ transition of Sn\({}^{2+}\).[31] It is worth mentioning that, in studies of the temperature-dependent PL spectrum, an unusual phenomenon in which the PL intensity increases with rising temperature, called negative thermal quenching (NTQ), has been widely reported in GaAs, ZnS, and ZnO,[40, 41, 42, 43, 44, 45] but never in the \(\beta\)-Ga\({}_{2}\)O\({}_{3}\) community.
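For context, NTQ data are commonly fitted with a phenomenological expression in which thermally activated channels feed the emitting level (a Shibata-type formula). The sketch below is only an illustration of that generic model — the activation energies and coefficients are invented, not values fitted in this work:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def ntq_intensity(T, i0=1.0, D=(1.0,), Eq=(0.05,), C=(), Ej=()):
    """Shibata-type PL intensity with negative-thermal-quenching terms:
    I(T) = I(0) * (1 + sum_q D_q exp(-E'_q/kT)) / (1 + sum_j C_j exp(-E_j/kT)).
    With D terms only (no quenching channels C), I(T) rises with temperature."""
    num = 1.0 + sum(d * math.exp(-e / (K_B * T)) for d, e in zip(D, Eq))
    den = 1.0 + sum(c * math.exp(-e / (K_B * T)) for c, e in zip(C, Ej))
    return i0 * num / den

# Illustrative activation energy of 0.05 eV: intensity grows monotonically with T.
print(ntq_intensity(300) > ntq_intensity(100) > ntq_intensity(50))  # True
```

Adding conventional quenching terms (nonzero `C`, `Ej`) reproduces the usual intensity roll-off at higher temperatures.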
In this work, the temperature-dependent PL spectra of \(\beta\)-Ga\({}_{2}\)O\({}_{3}\) samples with/without Sn impurities were investigated. UV emission was observed in all samples, while GR emission appeared only in samples with Sn impurities at cryogenic temperature and shifted to UV emission at elevated temperature. Moreover, we report for the first time the NTQ effect of the UV emission, present a detailed analysis of the emission mechanism and transition dynamics of the UV and GR emissions, and propose configuration coordinate diagrams relating the UV emission to the formation of STEs and the GR emission to the _nsnp-ns\({}^{2}\)_ transition of Sn\({}^{2+}\). Furthermore, combining density functional theory (DFT) calculations with a semi-classical quantum theory model, we also propose a possible channel for the conversion from GR to UV emission and put forward the corresponding schematic diagram. | Advanced Electronic Materials |
13 | 10.1103/revmodphys.94.035001 | 39fcc2a1-5787-45c7-b973-e9842666eff9 | https://arxiv.org/pdf/2208.10236 | Chao-Yang Lu; Yuan Cao; Cheng-Zhi Peng; Jian-Wei Pan | Micius quantum experiments in space | -7.519531 | 39fcc2a1-5787-45c7-b973-e9842666eff9.pdf | 2,022 | arxiv_4_39fcc2a1-5787-45c7-b973-e9842666eff9_19 | arxiv | # Micius quantum experiments in space
## III Challenges in practical and large-scale applications
### Long distances
Having covered the typical early, small-scale experiments on quantum communications, here we raise the question: what limits the distance of quantum communications? In both fiber-optic and terrestrial free-space channels, there are inevitable photon losses, which scale up exponentially with the transmission length in optical fibers. Gisin _et al_. highlighted that at 1000 km, even with a perfect single-photon source of 10 GHz, ideal photon detectors, and 0.2 dB/km fiber losses, one would detect only 0.3 photons on average per century (Gisin and Thew, 2010). In classical communications, it is possible to amplify the signals 0 and 1. In contrast, an unknown quantum superposition state cannot be noiselessly amplified. This is known as the quantum no-cloning theorem, a fundamental no-go theorem in quantum mechanics. While it underpins the security of QKD, it excludes the possibility of simply amplifying quantum signals over long-distance quantum communications. | Reviews of Modern Physics |
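The arithmetic behind that striking estimate is easy to check: a fiber loss of \(\alpha\) dB/km over \(L\) km gives a transmittance of \(10^{-\alpha L/10}\). A quick sketch, with the numbers taken from the sentence above:

```python
# Back-of-envelope check of the 1000 km fiber estimate quoted above.
loss_db = 0.2 * 1000                  # 0.2 dB/km over 1000 km -> 200 dB total loss
transmittance = 10 ** (-loss_db / 10) # fraction of photons surviving: 1e-20
source_rate = 10e9                    # ideal 10 GHz single-photon source
detected_per_s = source_rate * transmittance
seconds_per_century = 100 * 365.25 * 24 * 3600
detected_per_century = detected_per_s * seconds_per_century
print(round(detected_per_century, 2))  # → 0.32 photons per century
```

This reproduces the quoted order of magnitude of roughly 0.3 detections per century.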
14 | 10.3390/app10176029 | 884b996f-1e4d-4e0a-b6d9-b16faab1655c | https://www.mdpi.com/2076-3417/10/17/6029/pdf | Hepeng Zheng; Yun Zhang; Yuan Wang; Lifeng Zhang; Jun Peng; Saisai Liu; Aibing Li | Characteristics of Atmospheric Kinetic Energy Spectra during the Intensification of Typhoon Lekima (2019) | -10.460938 | 884b996f-1e4d-4e0a-b6d9-b16faab1655c.md | 2,020 | chunk_884b996f-1e4d-4e0a-b6d9-b16faab1655c_6.md | mdpi | ## 2 Numerical Experiment
### Experimental Design
In this study, we perform a 48-h numerical simulation of Lekima from 0000 UTC 6 August to 0000 UTC 8 August, which covers the period of primary intensification and initial maturity of the typhoon. The model used here is the Advanced Research version of the WRF (ARW-WRF) model, version 3.6. The model domain is doubly nested through two-way nesting; the outer domain is centered at (20.75\({}^{\circ}\) N, 127.55\({}^{\circ}\) E), and the inner domain is set to move so that the simulated TC is always located at the center of the domain. The horizontal grid-point numbers of the outer and inner domains are 601 \(\times\) 601 and 401 \(\times\) 401, with horizontal resolutions of 7.5 and 2.5 km, respectively. Both model domains have 51 vertical levels, with the top at 25 hPa (z \(\approx\) 25 km).
For both domains, the simulation uses the Thompson microphysics scheme [25], the Yonsei University (YSU) boundary layer scheme [26], and the Rapid Radiative Transfer Model for general circulation models (RRTMG) longwave and shortwave radiation scheme [27]. Rayleigh damping is applied to the vertical velocity in the upper 5 km of the model domains to prevent artificial reflection of gravity waves from the model top [28]. In addition, the Kain-Fritsch cumulus parameterization scheme [29] is used in the outer domain. To provide the initial fields and lateral boundaries for the simulation, the European Center for Medium-Range Weather Forecasts (ECMWF) ERA5 reanalysis data at a horizontal resolution of 0.25\({}^{\circ}\)\(\times\) 0.25\({}^{\circ}\) and a time interval of 6 h are used. All analyses in this study are conducted based on the results from the inner domain, which are output at 1-h interval. | Applied Sciences |
15 | null | f27fa589-a9cb-473f-81d0-bbf18777a3e7 | https://arxiv.org/pdf/hep-ph/0101094 | null | arXiv:hep-ph/0101094v1 10 Jan 2001 | -10.921875 | f27fa589-a9cb-473f-81d0-bbf18777a3e7.pdf | null | arxiv_4_f27fa589-a9cb-473f-81d0-bbf18777a3e7_113 | arxiv | ## Chapter 6 Finite-sample corrections to discrepancy distributions
### 6.2 Scaling limits for the Lego discrepancy
#### Sequences and notation
In the following, we will investigate limits in which the number of bins \(M\) goes to infinity. Note that for each value of \(M\), we have to decide on the values of the volumes \(w_{n}\) of the bins. They clearly have to scale with \(M\), because their sum has to be equal to one. There are, of course, many possible ways for the measures to scale, i.e., many double-sequences \(\{w_{n}^{[M]},1\leq n\leq M,M>0\}\) of positive numbers with
\[\sum_{n=1}^{M}w_{n}^{[M]}\ =\ 1\ \ \ \ \forall\,M>0\qquad\text{and}\qquad\lim_{M \to\infty}\sum_{n=1}^{M}w_{n}^{[M]}\ =\ 1\ . \tag{6.42}\]We, however, want to restrict ourselves to discrepancies in which the relative sizes of the bins stay of the same order, i.e., sequences for which
\[\inf_{n,M}Mw_{n}^{[M]}>0\qquad\text{and}\qquad\sup_{n,M}Mw_{n}^{[M]}<\infty\ . \tag{6.43}\]
It will appear to be appropriate to specify the sequences under consideration by another criterion, which is for example satisfied by the sequences mentioned above. It can be formulated in terms of the objects
\[M_{p}\ :=\ \sum_{n=1}^{M}\left(w_{n}^{[M]}\right)^{1-p}\ \,\quad p\geq 1\ \, \tag{6.44}\]
and is given by the demand that
\[h_{p}\in[1,\infty)\quad\forall\,p\geq 1\ \,\quad\text{where}\quad h_{p}:=\lim_{M \to\infty}\frac{M_{p}}{M^{p}}\ . \tag{6.45}\]
Within the set of sequences we consider, there are those for which the bins become asymptotically equal, i.e., sequences with
\[w_{n}^{[M]}=\frac{1+\varepsilon_{n}^{[M]}}{M}\qquad\text{with}\qquad\lim_{M\to \infty}\max_{1\leq n\leq M}\left|\varepsilon_{n}^{[M]}\right|=0\ \, \tag{6.46}\]
and \(\varepsilon_{n}^{[M]}>-1\), \(1\leq n\leq M\) of course. They belong to the set of sequences with \(h_{p}=1\ \forall\,p\geq 1\), which will allow for special asymptotic probability distributions.
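As a quick numerical illustration of (6.44) and (6.45) (with made-up finite-\(M\) bin sequences, so these are finite-\(M\) estimates of \(h_p\), not limits): equal bins give \(M_p/M^p=1\), while a sequence in which half the bins are twice as large as the other half gives \(h_2=1.125\).

```python
# Finite-M estimate of h_p = lim M_p / M^p for concrete bin-volume sequences.
def h_p_estimate(weights, p):
    """M_p / M^p, with M_p = sum_n w_n^(1-p), for bin volumes summing to one."""
    M = len(weights)
    return sum(w ** (1 - p) for w in weights) / M ** p

M = 10_000
equal = [1.0 / M] * M                                    # asymptotically equal bins
skewed = [2 / (1.5 * M)] * (M // 2) + [1 / (1.5 * M)] * (M // 2)  # 2:1 volume split

print(h_p_estimate(equal, 2))   # ≈ 1.0, matching h_p = 1 for equal bins
print(h_p_estimate(skewed, 2))  # ≈ 1.125 for this 2:1 split
```

For the skewed sequence, \(M_2=\tfrac{M}{2}\cdot\tfrac{3M}{4}+\tfrac{M}{2}\cdot\tfrac{3M}{2}=1.125\,M^2\), consistent with the printed value.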
In the following analysis, we will consider functions of \(M\) and their behavior if \(M\to\infty\). To specify relative behaviors, we will use the symbols "\(\sim\)", "\(\asymp\)" and "\(\prec\)". The first one is used as follows:
\[f_{1}(M)\sim f_{2}(M)\quad\iff\ \lim_{M\to\infty}\frac{f_{1}(M)}{f_{2}(M)}\ =\ 1\ . \tag{6.47}\]
If a limit as above is not necessarily equal to one and not equal to zero, then we use the second symbol:
\[f_{1}(M)\asymp f_{2}(M)\quad\iff\ \ f_{1}(M)\sim cf_{2}(M)\ \,\quad c\in(0, \infty)\ . \tag{6.48}\]
We only use this symbol for those cases in which \(c\neq 0\). For the cases in which \(c=0\) we use the third symbol:
\[f_{1}(M)\prec f_{2}(M)\quad\iff\ \ \lim_{M\to\infty}\frac{f_{1}(M)}{f_{2}(M)}\ =\ 0\ . \tag{6.49}\]
We will also use the \(O\)-symbol, and do this in the usual sense. We can immediately use the symbols to specify the behavior of \(M_{p}\) with \(M\), for the criterion of Eq.(6.45) tells us that
\[M_{p}\asymp M^{p}\ \, \tag{6.50}\]
and that
\[M_{p}\sim M^{p}\quad\text{if}\quad h_{p}=1\ . \tag{6.51}\]
In our formulation, also the number of data points \(N\) runs with \(M\). We will, however, never denote the dependence of \(N\) on \(M\) explicitly and assume that it is clear from now on. Also the upper index at the measures \(w_{n}\) we will omit from now on. | null |
5,000,001 | 10.1049/cmu2.12105 | 8c682880-b00f-40b2-a98a-72fa50cb47ec | https://ietresearch.onlinelibrary.wiley.com/doi/pdfdirect/10.1049/cmu2.12105 | Abolfazl Changizi; Mohammad Javad Emadi | Age‐optimal path planning for finite‐battery UAV‐assisted data dissemination in IoT networks | -9.84375 | 8c682880-b00f-40b2-a98a-72fa50cb47ec.md | 2,021 | chunk_8c682880-b00f-40b2-a98a-72fa50cb47ec_9.md | wiley | # Age-optimal path planning for finite-battery UAV-assisted data dissemination in IoT networks
## 3 Problem Formulation
Thus, the weighted sum-\(\mathrm{AoI}\) for selected devices is formulated as
\[\mathcal{A}^{k}=\sum_{i=1}^{N}\lambda_{\pi_{i}}\mathcal{A}_{\pi_{i}}^{k}=\sum_{i=1}^{N}\lambda_{\pi_{i}}\sum_{j=i}^{N}\zeta_{\pi_{j},\pi_{j+1}}^{k}. \tag{12}\]
Let \(g_{k}(i,S)\) denote the minimum weighted cost of path starting from node \(v_{i}\), passing all nodes in the set \(S\) exactly once and returning to the DC \(r_{0}\) in the \(k\)th UAV flight turn. To find the stage-WSHP in the \(k\)th flight turn, \(g_{k}(i,S)\) can be expressed as
\[g_{k}(i,S)= \tag{13}\] \[ \begin{cases}\left(\sum_{s=1}^{N}\lambda_{\pi_{s}}\right)\zeta_{s,i}^{k},&S=\emptyset\\ \min_{\eta\in S}\left\{\left(\sum_{s\in\Delta_{\pi_{s}}^{k}}\lambda_{\pi_{s}}\right)\zeta_{s,i}^{k}+g_{k}(i,S-\{i\})\right\},&S\neq\emptyset,\end{cases} \]
where \(I_{\pi}^{k}\) denotes all the indices of selected devices in the \(k\)th flight turn. Therefore, the minimum path cost can be attained using
\[\min_{\nu\in\Delta_{\pi}^{k}}g_{k}(i,\mathcal{V}-\{v_{i}\}), \tag{14}\]
where the cost function \(g_{k}(i,\mathcal{V}-\{v_{i}\})\) is calculated by (13) iteratively. According to the above discussion, Algorithm 1 is proposed. To summarise, in each flight turn, we first assign the weight and profit of each device using (\(\mathbb{\hat{\gamma}}\)) and select a few devices. Then using (\(\mathbb{\hat{\gamma}}\)), we calculate the \(\zeta_{s,i}\) between selected devices (for \(i,j\in I_{\pi}^{k}\)). To find the best order of visiting, we calculate (13) for each node \(v_{i}\in I_{\pi}^{k}\) and all the subsets of \(S\subseteq I_{\pi}^{k}-\{v_{i}\}\) and record the values in a table. Finally, the optimum path is determined using (14). To use the DP, we multiply all the parameters of the KP by a large number to have integer values. As mentioned, we do not consider constraint (5d) in the proposed solution. This issue is fixed due to the DP approach used for the first subproblem, as the selected devices for all the knapsack capacities less than \(C\) (integer format) can be easily recorded (or at least for some values near \(C\)). Then, the optimal visiting order is determined using an algorithm called the stage-WSHP. If the visiting order needs more energy than the propulsion energy limit, or equivalently constraint (5d) is not satisfied, it is sufficient to consider the selected devices for a knapsack capacity less than \(C\) to satisfy constraint (5d).
1:The network parameters \((v_{i},\sigma_{i}^{k-1},E_{i}^{k},I_{i}^{k},\lambda_{i})\), the UAV parameters \((V^{\prime},k,E_{moc},P)\) and the channel parameters \((\beta,\alpha,B,\sigma^{2})\) for all \(i\in I\). 2:Age-optimal trajectory \(\mathcal{X}_{\beta}^{k}\) for \(k\)th flight turn, battery use, weighted sum-\(\mathrm{AoI}\)
3:Initialize:
4:Set profits \(\beta_{1}^{k}\) and weights \(\pi_{i}^{k}\) and \(C\) using (\(\mathbb{\hat{\gamma}}\)) for all \(i\in I\). 5:Multiply weights and knapsack capacity by a large number, e.g \(K\), to have integer weights and capacity; \(\pi_{i}^{k}=\sum_{i}\mathcal{A}_{i}^{k}\) and \(C=KC\). 6:Set \(I_{\pi}^{k}=\emptyset\). 7:for\(j=0,...,C\)do
8:\(\pi[0,j]:=0\)
9:endfor
10:for\(i=1,...,M\)do
11:for\(j=0,...,C\)do
12:if\(\pi_{i}^{k}>j\)then
13:\(\pi[i,j]:=\pi[i-1,j]\)
14:\(\Sigma_{k}\)then\((i,j)=0\)
15:else
16:\(\pi[i,j]:=\max\{\pi[i-1,j],\pi[i-1,j-\pi_{i}^{k}]+\beta_{i}^{k}\}\)
17:\(\Sigma_{k}\)then\((i,j)=1\)
18:endif
19:endfor
20:endfor
21:for\(i\in I_{\pi}^{k}\)do
22:for\(S\subseteq\mathcal{V}-\{v_{i}\}\)do
23:Calculate the minimum path cost \(g_{k}(i,S)\) based on (13). 24:endfor
25:endfor
26:Find the first optimal node \(\pi_{i_{1}}^{k}=\arg\min_{\eta\in\mathcal{V}}g_{k}(i,\mathcal{V}-\{v_{i}\})\). 27:Trace back to find the optimal trajectory starting with node \(\pi_{i_{1}}^{k}\) and ending with node \(r_{0}\). 28:Update age of each device using (\(\mathbb{\hat{\gamma}}\)). 29:Calculate \(\sum_{i=1}^{M}\lambda_{i}\mathcal{A}_{i}^{k}\). 30:return Optimal trajectory, \(\pi[M,C]\), weighted sum-\(\mathrm{AoI}\). | IET Communications |
16 | 10.48550/arXiv.1111.0882 | d82331db-adcd-437c-b4de-93e4efd6e7d1 | https://arxiv.org/pdf/1111.0882 | Tiphaine Phe-Neau; Marcelo Dias de Amorim; Vania Conan | Using Neighborhood Beyond One Hop in Disruption-Tolerant Networks | -9.453125 | d82331db-adcd-437c-b4de-93e4efd6e7d1.pdf | 2,011 | arxiv_3_d82331db-adcd-437c-b4de-93e4efd6e7d1_8 | arxiv | # Using Neighborhood Beyond One Hop in Disruption-Tolerant Networks
## IV Cost analysis
#### Iv-A2 Neighborhood Knowledge Overhead (\(N_{o}\))
represents the overhead to gather information about the neighborhood. To get all nodes' \(T\)-neighborhoods, the basic approach consists in sending epidemic probes with an upper threshold of \(T\). Node \(A\) broadcasts a discovery message (DM) to its contacts with a TTL set to \(T\). All nodes that received the DM rebroadcast this message with a TTL set to \(T-1\), and so on. We assume that each transmission is acknowledged (see Fig. 2 for a detailed example). This leads to a cost of:
\[N_{o}=2\times\text{card}(\text{CC of size}<T)+\text{card}(\text{CC of size}=T)+1, \tag{2}\]
where \"card\" stands for cardinality and \"CC\" for connected component. \(N_{o}\) does not depend on the path length that DM have to cross. With little aggregation, \(N_{o}\) only depends on the number of neighbors in a node's connected component. \(N_{o}\) is responsible for most of the overhead in our analysis as it consists in frequent neighborhood monitoring. | null |
17 | 10.3390/rs5084006 | fcb768b2-d58d-46b1-92b7-ff3aab7a3109 | https://asp-eurasipjournals.springeropen.com/counter/pdf/10.1186/1687-6180-2014-132.pdf | Nasser Najibi; Shuanggen Jin | Physical Reflectivity and Polarization Characteristics for Snow and Ice-Covered Surfaces Interacting with GPS Signals | -8.890625 | fcb768b2-d58d-46b1-92b7-ff3aab7a3109.md | 2,013 | springer_3_fcb768b2-d58d-46b1-92b7-ff3aab7a3109_9.md | springer | ## 4 Measurement campaigns and results
In this section, the results of two experiments performed during the 2013 summer season are discussed:
1. _Piazza d'Armi_, Turin, Italy, 16 July, 2013, antenna in a static position, compact receiver, sandy terrain
2. _Montoro_, Avellino, Italy, 22 August, 2013, moving antenna, PC-based receiver, grass terrain
All the experiments were carried out using a circular metal disk (28-cm diameter) as the target. The dimensions of this object are comparable to those of an improvised explosive device or a pressure-activated mine.
A MATLAB tool was developed to predict the positions of all the specular reflection points for any available GPS signal, automatically projecting them on a Google Earth map. The specular reflection points can be found on the basis of the receiver position and the predicted GPS satellite orbits (downloaded from the CALSKY website - www.calsky.com - and based on the predicted IGS orbits). Knowledge of the expected positions of the available reflections given by this tool was fundamental for planning the measurement campaigns. The antenna used was a commercial device, manufactured by Antcom [43]. It is an active L1/L2 RH/LH antenna (PN 4261215), characterized by an HPBW of 140 deg (maximum gain 3.5 dB). The antenna was fixed on a plastic-wood structure in order to perform the measurements at a constant height (3 m) from the ground and in far-field conditions. | Remote Sensing |
18 | 10.3390/app14010435 | 5e939390-2a14-46eb-beae-a0a488ed323c | https://www.mdpi.com/2076-3417/14/1/435/pdf | Can Li; Hua Sun; Changhong Wang; Sheng Chen; Xi Liu; Yi Zhang; Na Ren; Deyu Tong | ZWNet: A Deep-Learning-Powered Zero-Watermarking Scheme with High Robustness and Discriminability for Images | -10.09375 | 5e939390-2a14-46eb-beae-a0a488ed323c.md | 2,024 | chunk_5e939390-2a14-46eb-beae-a0a488ed323c_21.md | mdpi | ## 4 Experimental Results and Analysis
### Comparisons with Existing Methods
#### 4.4.1 Robustness
To compare the robustness of ZWNet with [PERSON]'s method, [PERSON]'s method, and [PERSON]'s method, we conducted tests using the same test image, Lena, and subjected them to identical attacks. The results are summarized in Table 4.
From Table 4, it is evident that under the same attack conditions, ZWNet exhibits higher NC values compared to the other methods in most cases. It excels in robustness, with only a slight decrease in performance under rotation attacks. This suggests that the features extracted by the convolutional layer may not be highly robust when it comes to rotation attacks. However, ZWNet still achieves a substantial NC value greater than 0.9 in this scenario, which is more than adequate for copyright identification.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Test Image** & **Lena** & **Mandril** & **Tree** & **Girl** \\ \hline Lena & 0 & 90 & 88 & 113 \\ Mandril & 90 & 0 & 92 & 97 \\ Tree & 88 & 92 & 0 & 91 \\ Girl & 113 & 97 & 91 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Hamming distances between the zero-watermarks of four test images. | Applied Sciences |
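The distances in Table 3 are Hamming distances between binary zero-watermarks, i.e., counts of differing bit positions. A toy sketch with invented 8-bit watermarks (the actual zero-watermarks are much longer) illustrates the metric:

```python
def hamming(w1, w2):
    """Number of positions at which two equal-length bit sequences differ."""
    assert len(w1) == len(w2)
    return sum(a != b for a, b in zip(w1, w2))

# Invented 8-bit zero-watermarks for illustration only.
wm_a = [1, 0, 1, 1, 0, 0, 1, 0]
wm_b = [1, 1, 1, 0, 0, 0, 0, 0]
print(hamming(wm_a, wm_b))  # 3
```

Large pairwise distances, as in Table 3, indicate that the zero-watermarks of different images are easy to tell apart.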
19 | 10.3390/app12105044 | f2754c0b-6d34-40c7-88fb-5c8542abcaec | https://www.mdpi.com/2076-3417/12/10/5044/pdf | Xiangfan Wu; Yangyang Guo; Zuzhi Tian; Fangwei Xie; Jinjie Ji; Haopeng Li | Analysis on Flow and Temperature Field of High-Power Magnetorheological Fluid Transmission Device | -10.015625 | f2754c0b-6d34-40c7-88fb-5c8542abcaec.md | 2,022 | chunk_f2754c0b-6d34-40c7-88fb-5c8542abcaec_3.md | mdpi | # Analysis on Flow and Temperature Field of High-Power Magnetorheological Fluid Transmission Device
## 1 Introduction
Magnetorheological (MR) fluid is a novel type of intelligent material [1; 2; 3]. MR fluid is composed of soft magnetic particles, carrier fluid and additives [4; 5; 6; 7]. The rheological effect of MR fluid is that it can flow freely without a magnetic field; under the action of a magnetic field, MR fluid solidifies and can transfer a certain shear force [8; 9; 10; 11]. MR fluid is characterized by a fast response, simple control and strong anti-interference ability [12; 13; 14; 15]. MR transmission technology with MR fluid as a working medium has broad application prospects in the soft start, soft braking and stepless speed regulation of mechanical equipment [16; 17; 18; 19]. An MR transmission device generates a lot of heat due to wall slip during operation, so its maximum transmission power is usually restricted by its heat-dissipation performance. Scholars have carried out a lot of research to investigate the effect of temperature on the transmission characteristics of MR devices and to improve the heat dissipation performance of MR transmission devices. [PERSON] et al. [20] found that when the temperature increases from \(-40\)\({}^{\circ}\)C to \(150\)\({}^{\circ}\)C, the dynamic yield stress and viscosity resistance of MR fluids decrease by 10% and 95%, respectively. [PERSON] et al. [21] established a thermal analysis model of an MR damper and tested the relationship between temperature and damping force. Results show that the damping force decreases sharply with the increase in temperature. Through experimental studies, [PERSON] et al. [22] found that the increase in temperature had a negative impact on the durability of MR fluids, which would lead to significant attenuation of the transmission torque. [PERSON] et al. [23] proposed a multidisk MR fluid actuator.
Simulation and experiment results show that the temperature of the MR fluid increases linearly with time in the slip and loaded states, and the greater the slip power is, the faster the temperature rises. [PERSON] et al. [24] studied the influence of temperature on the properties of MR fluid, and the experiment showed that shear yield stress decreased with the increase in temperature. [PERSON] et al. [25] experimentally investigated the effect of temperature on the torque transmission characteristics of an MR device. Results show that, compared to 20 \({}^{\circ}\)C, the transmitted torque was reduced by about 20% at 80 \({}^{\circ}\)C. [PERSON] et al. [26] investigated the influence of temperature on the stability of torque transmission by MR fluid. [PERSON] et al. [27] formulated a theoretical model for the slip differential heat between two neighboring particles of MRFs in shear and squeeze modes. The proposed model can satisfactorily describe the main micro-characteristics of the slip differential heat of MRFs in shear and squeeze modes. [PERSON] et al. [28] investigated the effect of temperature on the performance of an MR damper while it is working. Results show that the saturation magnetization of the particles was reduced by 57% at higher temperatures (127 \({}^{\circ}\)C). [PERSON] et al. [29] analyzed the heating and cooling mechanism of an MR clutch and proposed an auxiliary liquid-cooling method for the clutch. The results show that the increase in temperature leads to a decrease in viscous torque and total output torque. [PERSON] et al. [30] put forward a method of internal circulation cooling: the coolant can rotate with the rotating sleeve and flow further into the brake near the MR fluid, thereby achieving an improved cooling effect. [PERSON] et al. [31] chose to add an aluminum-foil bubble insulation material with a low thermal conductivity in the cavity between the electromagnetic coil and the MRF to avoid a rapid temperature rise.
Results show that the rate of increase in the MRF temperature in the working area of the damper with the insulation material could be reduced by 57.4%. | Applied Sciences |
20 | 10.3390/app15137517 | be29e89a-92f3-4a7d-b5a2-661bc5fe0cb6 | https://www.mdpi.com/2076-3417/15/13/7517/pdf | Xianfeng Zhang; Bin Hu; Shukan Liu; Qiao Sun; Lin Chen | AttenFlow: Context-Aware Architecture with Consensus-Based Retrieval and Graph Attention for Automated Document Processing | -8.21875 | be29e89a-92f3-4a7d-b5a2-661bc5fe0cb6.md | 2,025 | chunk_be29e89a-92f3-4a7d-b5a2-661bc5fe0cb6_12.md | mdpi | ## 3 Method
### Retriever Consensus Confidence Fusion (RCCF)
#### 3.3.1 Retriever Consensus Measurement
The foundation of RCCF lies in accurately measuring consensus among different retrievers. For a query \(q\) and document \(d\), given \(n\) retrievers \(R_{1},R_{2},\ldots,R_{n}\), we define comprehensive consensus measures that capture both ranking and scoring agreement patterns.
**Rank Consistency:** The rank consistency measures how similarly different retrievers rank a document:
\[C_{\text{rank}}(d)=1-\frac{\sigma_{\text{rank}}(d)}{\text{max}(\sigma_{\text{ rank}})}\times\frac{1}{\sqrt{|A(d)|}} \tag{3}\]
where \(\sigma_{\text{rank}}(d)\) is the standard deviation of document \(d\)'s ranks across retrievers, \(\text{max}(\sigma_{\text{rank}})\) is the maximum rank standard deviation among all candidate documents for normalization, \(|A(d)|\) is the number of retrievers that return document \(d\), and \(\frac{1}{\sqrt{|A(d)|}}\) is a penalty factor that reduces consistency scores when fewer retrievers return the document.
**Score Consistency:** Due to the significant scale differences between retriever scores, we first normalize scores for each retriever:
\[s_{\text{norm},i}(d)=\frac{s_{i}(d)-\text{min}(S_{i})}{\text{max}(S_{i})- \text{min}(S_{i})} \tag{4}\]
where \(s_{i}(d)\) is the original score from retriever \(i\) for document \(d\), and \(S_{i}\) is the set of all scores returned by retriever \(i\). The score consistency is then computed as follows:
\[C_{\text{score}}(d)=1-\frac{\sigma_{\text{score}}(d)}{1+\sigma_{\text{score }}(d)}\times\frac{1}{\sqrt{|A(d)|}} \tag{5}\]
where \(\sigma_{\text{score}}(d)\) is the standard deviation of normalized scores for document \(d\).
**Overall Consensus:** The final consensus measure combines both ranking and scoring consistency:
\[C(d)=\alpha\cdot C_{\text{rank}}(d)+(1-\alpha)\cdot C_{\text{score}}(d) \tag{6}\]
where \(\alpha\in[0,1]\) balances the importance of rank versus score consistency. | Applied Sciences |
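Equations (3)–(6) can be sketched as follows for a single document; the input ranks and normalized scores are those returned by the retrievers in \(A(d)\), and `max_rank_std` plays the role of \(\text{max}(\sigma_{\text{rank}})\). The example values are invented:

```python
import math
from statistics import pstdev

def consensus(ranks, norm_scores, max_rank_std, alpha=0.5):
    """Overall consensus C(d) of eqs. (3)-(6) for one document, given the
    ranks and min-max-normalized scores from the retrievers that returned it."""
    penalty = 1 / math.sqrt(len(ranks))                       # 1/sqrt(|A(d)|)
    c_rank = 1 - (pstdev(ranks) / max_rank_std) * penalty     # eq. (3)
    s = pstdev(norm_scores)
    c_score = 1 - (s / (1 + s)) * penalty                     # eq. (5)
    return alpha * c_rank + (1 - alpha) * c_score             # eq. (6)

# Perfect agreement across three retrievers yields the maximum consensus of 1.
print(consensus([4, 4, 4], [0.8, 0.8, 0.8], max_rank_std=10.0))  # 1.0
```

Disagreeing ranks or scores raise the standard deviations and pull the consensus below one, as intended.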
21 | 10.3390/biomedicines8090362 | a015bbb2-271f-435f-83b2-9bbbcc4af168 | https://www.mdpi.com/2227-9059/8/9/362/pdf | Nicholas Bragagnolo; Christina Rodriguez; Naveed Samari-Kermani; Alice Fours; Mahboubeh Korouzhdehi; Rachel Lysenko; Gerald F. Audette | Protein Dynamics in F-like Bacterial Conjugation | -9.09375 | a015bbb2-271f-435f-83b2-9bbbcc4af168.md | 2,020 | chunk_a015bbb2-271f-435f-83b2-9bbbcc4af168_15.md | mdpi | # Protein Dynamics in F-like Bacterial Conjugation
## 3 Structures Involved in Pilin Processing, Pilus Extension, and Retraction
### Pilus Retraction--TraH and TrbI
TraH is a cysteine-rich protein unique to F-like systems, with a size of 458 aa that is processed to 433 aa for localization, and contains a C-terminal coiled-coil domain known as a motif for oligomerization or other PPIs [14]. This motif is predicted to mediate the interaction complex between TraF, -W, -U, TrbB and -I; TraH directly interacts with TraF, TrbI and the mating pair stabilization (Mps) protein TraU [85]. TraH also contains three N-terminal hydrophobic domains of approximately 20 aa each (the first 25 aa being the cleaved signal peptide), which aligns with results showing that TraH, -F, -U and -W are bound in the OM in the context of the complex [87]. Disulfide bond formation within TraH performed by DsbA and isomerization by TrbB are important for the proper activity of TraH as shown through mutational assays. Interactions with TrbI occur at conserved sequences in the TraH N-terminus, specifically Gly193-Leu226, which also contains a Walker A motif; TraH exhibits no NTPase activity despite this characteristic motif [85; 87]. As TraH has many interacting partners its role in the T4 SS is difficult to ascertain; the mutation of _traH_ appears to affect both pilus elongation and retraction [99].
Pilus retraction is still a major point of intrigue, as it is hypothesized to occur in an energy-independent manner [11]. Once initiated, F-pili retraction occurs at an average of 15.8 nm/s, which is less than half the mean extension rate of 39.5 nm/s [70]. TrbI is a bitopic, 128 aa IM protein that plays a role in the retraction process by interacting with the periplasmic OM-associated complex composed of TraF, -H, -U, and -W [55]. TrbI spans the IM by an N-terminal anchor from residues His17\(-\)Val40, with the remaining 88 residues forming a hydrophilic domain in the periplasmic space [85; 96]. Several proteins of the T4 SS that are localized to the periplasmic space express conserved cysteine residues; homologs of TrbI tend to express a single conserved cysteine residue [14]. Mutations in \(trbI\) were not observed to affect pilus production or DNA transfer efficiency; however, some mutations cause abnormally long pili, and excess TrbI has also been observed to affect male-specific phage sensitivity [64]. Both single-stranded DNA and RNA phage infections are inhibited by excess TrbI [96]. Both observations support the hypothesis that TrbI functions solely in the regulation of retraction. As previously noted, TrbI can directly interact with TraH, and this interaction is considered to initiate pilus retraction [85]. If TrbI localizes in the IM and TraH assembles into the OM in the context of the T4 SS complex, the TrbI:TraH pair could be part of a second envelope-spanning structure analogous to the TraB, -K, -V core complex [87]. | Biomedicines |
22 | 10.3390/a17120551 | 36488ebb-ec5f-4c4d-ba91-e36cb94535bf | https://www.mdpi.com/1999-4893/17/12/551/pdf | Alaa E. Abdel-Hakim; Abdel-Monem M. Ibrahim; Kheir Eddine Bouazza; Wael Deabes; Abdel-Rahman Hedar | Ellipsoidal K-Means: An Automatic Clustering Approach for Non-Uniform Data Distributions | -10.625 | 36488ebb-ec5f-4c4d-ba91-e36cb94535bf.md | 2,024 | chunk_36488ebb-ec5f-4c4d-ba91-e36cb94535bf_14.md | mdpi | # Ellipsoidal K-Means: An Automatic Clustering Approach for Non-Uniform Data Distributions
## 3 Integrating Adaptive Ellipsoidal Clustering with Simulated Annealing Heuristic
### SAELLC Algorithm
The SAELLC algorithm, which is shown in Algorithm 2, aims to find the optimal centers for a given number of clusters \(K\). It ensures that the epoch length is sufficient to identify the best centers by utilizing split and merge procedures.
```
Initialization:
  Let \(K_{\min}\) and \(K_{\max}\) indicate the initial minimum and maximum number of clusters;
  Choose \(K\) randomly from \([K_{\min},K_{\max}]\) and pick initial centers \(c_{k},\ k=1,\ldots,K\);
  Choose the cooling schedule parameters: initial temperature \(T_{\max}\), final temperature \(T_{\min}\),
    cooling rate \(\lambda\in(0,1)\), and epoch length \(M\);
  Set \(T:=T_{\max}\) and counter \(p:=1\);
Main Loop:
while \(T>T_{\min}\) do
  Epoch Loop:
  for \(m=1\) to \(M\) do
    Compute a trial solution of centers \(\tilde{c}_{k}^{p}=c_{k}^{p}+\Delta c_{k}^{p}\) with step size \(\Delta\);
    Calculate the new objective function value \(f(\tilde{c}_{k}^{p})\) using the ESF-NN function as in Equation (21);
    if \(f(\tilde{c}_{k}^{p})>f(c_{k}^{p})\) then
      \(c_{k}^{p}:=\tilde{c}_{k}^{p}\);
    else
      Compute \(\Delta f=f(c_{k}^{p})-f(\tilde{c}_{k}^{p})\);
      Generate a random number \(r\in[0,1]\);
      if \(r<e^{-\Delta f/T}\) then
        \(c_{k}^{p}:=\tilde{c}_{k}^{p}\);
      end if
    end if
  end for
  Splitting and Merging:
  if \(p=1\) then
    Randomly apply cluster Merging Procedure 1 or Splitting Procedure 2;
  else
    Compare the results obtained in the previous two main loops;
    Let \(f_{\nu}\) give the best obtained centers, where \(\nu\in(p,p+1)\);
    if \(K_{\nu}\) is the largest number of clusters then
      Apply Splitting Procedure 2;
    else
      Merge the smallest number of clusters \(K_{\nu}\) using Merging Procedure 1;
    end if
  end if
  Decrease temperature: \(T:=\lambda T\), \(p:=p+1\);
end while
Set \(f_{\text{best}}\), \(x_{\text{best}}\), and \(K_{\text{best}}\) to the optimal values determined by the algorithm;
```
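The accept/reject core of Algorithm 2 can be sketched in a few lines. The one-dimensional toy below shows only the Metropolis-style acceptance rule (for maximization) and the geometric cooling schedule \(T:=\lambda T\); the objective, step rule, and parameter names are placeholders, not the ESF-NN function or the split/merge procedures of SAELLC:

```python
import math
import random

def simulated_annealing(f, x0, step, t_max=1.0, t_min=1e-3, lam=0.9, epoch=50, seed=0):
    """Metropolis-style accept/reject loop with geometric cooling T := lam * T,
    mirroring the acceptance rule of Algorithm 2 (maximization)."""
    rng = random.Random(seed)
    x, t = x0, t_max
    while t > t_min:
        for _ in range(epoch):
            trial = x + rng.uniform(-step, step)   # perturb the current solution
            if f(trial) > f(x):
                x = trial                          # always accept an improvement
            else:
                df = f(x) - f(trial)               # df >= 0 for a worsening move
                if rng.random() < math.exp(-df / t):
                    x = trial                      # occasionally accept worse moves
        t *= lam                                   # cooling schedule
    return x
```

For example, maximizing \(f(x)=-(x-3)^2\) from \(x_0=0\) drives the solution toward 3 as the temperature falls and the walk turns greedy.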
**Algorithm 2** SAELLC Algorithm | Algorithms |
23 | 10.1186/s40623-022-01671-w | 125908b6-15dc-4236-a340-dae62e488dbb | https://earth-planets-space.springeropen.com/counter/pdf/10.1186/s40623-022-01671-w.pdf | Sigrid Böhm; Johannes Böhm; Jakob Gruber; Lisa Kern; Jamie McCallum; Lucia McCallum; Tiege McCarthy; Jonathan Quick; Matthias Schartner | Probing a southern hemisphere VLBI Intensive baseline configuration for UT1 determination | -8.757813 | 125908b6-15dc-4236-a340-dae62e488dbb.md | 2,022 | springer_3_125908b6-15dc-4236-a340-dae62e488dbb_14.md | springer | # Probing a southern hemisphere VLBI Intensive baseline configuration for UT1 determination
## Scheduling and simulation
### Validation against UT1\(-\)UTC results from other Intensive sessions
We selected INT1/3 solutions from four external AC, namely from the Federal Agency for Cartography and Geodesy in Germany (bkg), NASA Goddard Space Flight Center (gsf), Paris Observatory (opa), and the United States Naval Observatory (usn). The choice of AC was made depending on the complete availability of the UT1\(-\)UTC values for the same 64 sessions as included in the INT1/3 vie solution. The differences between UT1\(-\)UTC values are assessed by means of the weighted mean (WMEAN) and weighted standard deviation (WSTD), which are also built w.r.t. the C04 series. The precision of the different series is compared via the mean formal errors (MFE). Additionally, we employed an alternative indicator for the accuracy of the UT1\(-\)UTC series, calculated with the so-called three-cornered hat method (3 CH). This method was primarily developed by [PERSON] and [PERSON] (1974) to investigate the random errors of atomic clocks. In its simplest form, it can be used to estimate the error variances of three independent time series supposed to measure the same physical quantity. In our case, we have only three networks measuring UT1\(-\)UTC independently, although minute correlations might arise when there is an overlap of the observed radio sources. We suppose that the variances of the three independent data sets SI, INT1/3 (=INT), and RI are related in the following way:
\[ \begin{split}\sigma^{2}_{\text{SI$-$INT}}&=\sigma^ {2}_{\text{SI}}+\sigma^{2}_{\text{INT}}\,\\ \sigma^{2}_{\text{SI$-$RI}}&=\sigma^{2}_{\text{SI}}+ \sigma^{2}_{\text{RI}}\,\\ \sigma^{2}_{\text{INT$-$RI}}&=\sigma^{2}_{\text{INT }}+\sigma^{2}_{\text{RI}}\.\end{split} \tag{4}\]
Figure 8: Heatmap of comparison parameters for different UT1\(-\)UTC time series. Right triangular matrix: weighted means (WMEAN) of UT1\(-\)UTC time series differences, sign convention is row minus column. Left triangular matrix: weighted standard deviations (WSTD) of UT1\(-\)UTC time series differences. Bottom: UT1\(-\)UTC accuracy from three-cornered hat analysis (3 CH) and UT1\(-\)UTC mean formal errors (MFE) of the different time series corresponding to the column labels. The colours visualise the range in addition to the values printed in the individual fields. All numbers are given in \(\upmu\)s
Consequently, we can derive the single variances by recombination:
\[ \begin{split}\sigma_{\rm SI}^{2}&=(\sigma_{\rm SI-INT}^{ 2}+\sigma_{\rm SI-RI}^{2}-\sigma_{\rm INT-RI}^{2})/2\,\\ \sigma_{\rm INT}^{2}&=(\sigma_{\rm SI-INT}^{2}+ \sigma_{\rm INT-RI}^{2}-\sigma_{\rm SI-RI}^{2})/2\,\\ \sigma_{\rm RI}^{2}&=(\sigma_{\rm SI-RI}^{2}+ \sigma_{\rm INT-RI}^{2}-\sigma_{\rm SI-INT}^{2})/2\.\end{split} \tag{5}\]
The square roots of the variances (hereafter designated 3 CH) reflect the accuracy of the individual series, independently of the respective adjustment model used in the estimation process. The index INT stands for any of the INT1/3 solutions of the five AC (vie, bkg, gsf, opa, usn). Equation (5) can therefore be evaluated five times for SI and RI. Hence the 3 CH figures for SI and RI are calculated as mean values from the evaluations with the five AC difference variances. A multi-solution 3 CH analysis including all INT solutions would fail (by yielding negative variances) due to the high correlations, stemming from the identical observation setup. The numbers for all comparison measures mentioned before (WMEAN, WSTD, 3 CH, MFE) are compactly presented as a kind of heatmap in Fig. 8. The biases, expressed as WMEAN, of the SI to the different INT\(1/3\) solutions are rather small, with an absolute maximum of \(7\)\(\upmu\)s. Biases between the different AC solutions show larger numbers, up to 15 \(\upmu\)s in absolute values. The WMEAN of SI minus C04 is 11 \(\upmu\)s, which is more than SI\(-\)INT\(1/3\), but well in the range of the WMEAN of the other Intensive solutions w.r.t. C04. | Earth, Planets and Space |
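The recombination in Eq. (5) amounts to solving three linear equations. A minimal sketch, with generic series labels A, B, C in place of SI, INT, RI; the inputs are the variances of the pairwise difference series:

```python
def three_cornered_hat(var_ab, var_ac, var_bc):
    """Recover the individual error variances from the three pairwise
    difference variances, as in Eq. (5)."""
    var_a = (var_ab + var_ac - var_bc) / 2.0
    var_b = (var_ab + var_bc - var_ac) / 2.0
    var_c = (var_ac + var_bc - var_ab) / 2.0
    return var_a, var_b, var_c
```

If two series are strongly correlated, a recovered variance can come out negative, which is exactly the failure mode noted above for a multi-solution 3 CH analysis.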
24 | 10.3390/app122211653 | 1d61d1a9-000e-4366-bff6-f85d543e49ce | https://www.mdpi.com/2076-3417/12/22/11653/pdf | Min Ma; Qiong Cao; Xiaoyang Liu | A Graph Convolution Collaborative Filtering Integrating Social Relations Recommendation Method | -10.15625 | 1d61d1a9-000e-4366-bff6-f85d543e49ce.md | 2,022 | chunk_1d61d1a9-000e-4366-bff6-f85d543e49ce_24.md | mdpi | # A Graph Convolution Collaborative Filtering Integrating Social Relations Recommendation Method
## 6 Conclusions
In this work, we proposed a new social recommendation method which leverages graph convolution technique and integrates social relations. Firstly, we construct the architecture of a general collaborative filtering social recommendation model based on graph convolution (SRGCF), which consists of four parts, which are initialization embedding layer, semantic aggregation layer, semantic fusion layer and prediction layer, respectively. The semantic aggregation layer and semantic fusion layer are the core of SRGCF, which play the role of extracting high-order semantic information and integrating various semantic information, respectively. Then, we propose a feasible SRRA algorithm on top of the architecture, which can model interactions as well as social relations. It can use richer social information to mine the potential relationship, so as to improve the performance of recommendations. Comparative experiments on four datasets have proven the effectiveness of the proposed model. Different from previous work, we try to explore how to use graph neural network method and introduce social auxiliary information to construct recommendation model in order to learn better representation. The graph-based model is superior to the traditional recommendation model because it can learn not only the representation of entities but also the relationships between them. However, limited by the shortcomings of graph neural network itself, such as excessive smoothing after several iterations, entity representation may not be fully learned, which requires some optimization in model design. In the future, we plan to optimize the model architecture by increasing the coupling between social modeling and interactive modeling, so that the representation learning is more adequate. We will also try to explore the advantages of other graphical representation learning techniques to improve the learning ability of the model. Conceptualization, M.M. ; methodology, X.L. ; software, Q.C. 
; formal analysis, Q.C.; writing--original draft preparation, Q.C.; writing--review and editing, Q.C. All authors have read and agreed to the published version of the manuscript.

Figure 8: Effect of embedding size \(d\) on Gowalla and Epinions.

**Funding:** This research was funded by Science and Technology Research Project of Chongqing Municipal Education Commission, grant number KJZDK202001101; Chongqing Postgraduate Research Innovation Project, grant number gzlcx20223205; General Project of Chongqing Natural Science Foundation, grant number cstc2021jcyj-msxmX0162; 2021 National Education Examination Research Project, grant number GJK2021028. **Institutional Review Board Statement:** Not applicable. **Informed Consent Statement:** Not applicable. **Data Availability Statement:** The calculated data presented in this work are available from the corresponding authors upon reasonable request. **Acknowledgments:** The authors would like to thank the anonymous reviewers for their valuable comments on our paper. **Conflicts of Interest:** The authors declare no conflict of interest. **Nomenclature** | Applied Sciences |
25 | 10.1186/s40623-024-02045-0 | d63bd320-996e-423d-910e-470f405f744a | https://earth-planets-space.springeropen.com/counter/pdf/10.1186/s40623-024-02045-0.pdf | Toyese Tunde Ayorinde; Cristiano Max Wrasse; Hisao Takahashi; Diego Barros; Cosme Alexandre Oliveira Barros Figueiredo; Ligia Alves da silva; Anderson Vestena Bilibio | Investigation of the long-term variation of gravity waves over South America using empirical orthogonal function analysis | -9.273438 | d63bd320-996e-423d-910e-470f405f744a.md | 2,024 | springer_5_d63bd320-996e-423d-910e-470f405f744a_3.md | springer | ## 2 Methodology
We used continuous TIMED/SABER temperature profile data from January 2002 to December 2021. The GW characteristics were extracted from the satellite-observed temperature profiles (\(T\)) using scale separation. The main concept is to separate the small-scale perturbations (\(T^{\prime}\)) from the background temperature (\(\overline{T}\)). The \(T^{\prime}\) profile, or the post-processed (e.g., band-pass filtering or spectral decomposition and reconstruction) GW properties, can then be determined ([PERSON] et al., 2002; [PERSON] et al., 2004, 2011, 2018; [PERSON] et al., 2008; [PERSON] et al., 2014). To calculate the GW parameters, we first averaged all the raw temperature profiles within every \(10^{\rm o}\times 10^{\rm o}\) latitude-by-longitude cell in all the selected pairs to get an effective background temperature (\(\overline{T}\)) ([PERSON] et al., 2013; [PERSON] et al., 2023). We used the maximum overlap discrete wavelet transform (MODWT) multiresolution analysis (MRA) ([PERSON], 2008) to decompose the mean temperature profiles in a cell. MRA is the process of decomposing a signal into constituent parts that, when combined, reproduce the original signal exactly. The decomposition of the signal is crucial for its usefulness in data analysis. The wavelet MRA separates the signal components using fixed functions known as wavelets. Utilizing scaled waveforms, the MODWT can effectively detect local non-periodic patterns and signal singularities, and can characterize signal structures by measuring signal fluctuations while concurrently assessing the signal's temporal and scaling features. To extract patterns from the data, wavelet filtering is thus more effective than conventional linear frequency filters ([PERSON] and [PERSON], 2000; [PERSON] and [PERSON], 2005).
To extract the GWs from the temperature profiles measured by satellites, MRA wavelet decomposition was used to decompose the raw temperature profile (\(T_{\rm raw}\)) to extract the background temperature (\(T\)) as shown below:
\[T_{\rm res}=T_{\rm raw}-\overline{T} \tag{1}\]
where \(T_{\rm res}\) denotes the small-scale temperature fluctuations caused by gravity wave activity. The primary goal is to distinguish between large-scale waves (such as planetary waves and tides), small-scale fluctuations (\(T_{\rm res}\)), and the background temperature ([PERSON] et al., 2022). Further, to remove noise and to derive the vertical wavelength (\(\lambda_{\rm v}\)), we applied the continuous wavelet transform (CWT) ([PERSON] and [PERSON], 1998) to each \(T_{\rm res}\), given as

\[T^{\prime}(h,\lambda_{\rm v})={\rm CWT}[T_{\rm res}], \tag{2}\]

Then we applied the inverse CWT to each \(T^{\prime}(h,\lambda_{\rm v})\) and restricted \(\lambda_{\rm v}\) to a range of 5-30 km to derive the temperature fluctuation (\(T^{\prime}\)), remove noise, and obtain an altitude-dependent \(\lambda_{\rm v}\). Restricting the vertical wavelength also enables us to remove waves that are not GWs ([PERSON] et al., 2017; [PERSON] and [PERSON], 2018). We estimated \(T_{\rm amp}\) using \(T^{\prime}\) and its Hilbert transform (\(H(T^{\prime})\)) following ([PERSON] et al., 2017; [PERSON] and [PERSON], 2018; [PERSON] et al., 2022), and it is given as:

\[T_{\rm amp}=\left[(T^{\prime})^{2}+H(T^{\prime})^{2}\right]^{\frac{1}{2}}. \tag{3}\]

The Ep is therefore estimated using Eq. (4) below,

\[E_{p}=\Big{(}\frac{g}{N}\Big{)}^{2}\bigg{(}\frac{T_{\rm amp}}{\overline{T}}\bigg{)}^{2}, \tag{4}\]

where \(g\) is the acceleration due to gravity, \(N\) is the Brunt-Väisälä frequency, and \(\overline{T}\) and \(T^{\prime}\) are the background temperature and the temperature fluctuations caused by gravity wave activity, respectively. \(T_{\rm amp}\) in Eq. (4) is the temperature amplitude of Eq. (3). The Ep calculation is based on the accurate extraction of \(T^{\prime}\) in Eq. | Earth, Planets and Space |
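Equations (3) and (4) are easy to prototype. The discrete analytic-signal construction below is a generic textbook implementation (not the authors' processing chain), and the numeric constants in the test are arbitrary illustrative values:

```python
import cmath
import math

def analytic_amplitude(x):
    """Envelope |T' + i H(T')| of a real series via the discrete analytic signal,
    built with a naive DFT (one-sided spectrum doubled, then inverted)."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * m * t / n) for t in range(n))
         for m in range(n)]
    H = [0j] * n
    H[0] = X[0]                       # keep DC
    if n % 2 == 0:
        H[n // 2] = X[n // 2]         # keep Nyquist bin for even n
    for m in range(1, (n + 1) // 2):
        H[m] = 2 * X[m]               # double positive frequencies
    a = [sum(H[m] * cmath.exp(2j * math.pi * m * t / n) for m in range(n)) / n
         for t in range(n)]
    return [abs(v) for v in a]

def potential_energy(g, brunt_n, t_amp, t_bar):
    """Eq. (4): Ep = (g / N)^2 * (T_amp / T_bar)^2."""
    return (g / brunt_n) ** 2 * (t_amp / t_bar) ** 2
```

For a pure cosine fluctuation the recovered envelope is constant and equal to the wave amplitude, which is the property Eq. (3) relies on.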
5,000,002 | 10.1155/2018/9575281 | c1daad86-0631-4548-ba7d-8c5927506a07 | https://onlinelibrary.wiley.com/doi/pdfdirect/10.1155/2018/9575281 | Zelalem Hailu Gebeyehu; Philip Kibet Langat; Ciira Wa Maina | BER Performance of Stratified ACO-OFDM for Optical Wireless Communications over Multipath Channel | -9.359375 | c1daad86-0631-4548-ba7d-8c5927506a07.md | 2,018 | chunk_c1daad86-0631-4548-ba7d-8c5927506a07_18.md | wiley | # BER Performance of Stratified ACO-OFDM for Optical Wireless Communications over Multipath Channel
## 4 Theoretical BER Bound of STACO-OFDM
### BER Interms of Optical Power of Transmitted Signal
The SNR of the symbol received at the \(k\)th subcarrier is then given by [7, 15]
\[{\rm SNR}_{k}=\frac{E\Big{[}\big{|}Y_{k}^{il}\Big{|}^{2}\Big{]}}{\sigma_{z}^{ 2}}=\frac{2^{2l-1}\pi\big{(}P_{l}^{o}\big{)}^{2}\big{|}H_{k}\big{|}^{2}}{N_{0} B_{\rm sc}N},\quad l=1,2,\ldots,S. \tag{58}\]
But for more simplicity, writing \({\rm SNR}_{k}\) in terms of the total average optical power of the combined signal is vital. The average optical power \(P_{o}^{\rm avg}\) of the combined signal \(x_{T}\left(t\right)\) can be given by [11, 21]
\[P_{o}^{\rm avg}=E\Big{[}x_{T}\big{[}n\big{]}\Big{]}=E\Bigg{[}\sum_{l=1}^{S}x_ {l}\big{[}n\big{]}\Bigg{]}=\sum_{l=1}^{S}\frac{\sigma_{l}}{\sqrt{\pi 2^{l}}} \tag{59}\]
The optical power penalty of the combined system with respect to the optical power at the \(l\)th stratum can be written as follows:
\[\alpha_{l}^{o}=\frac{\big{(}P_{o}^{\rm avg}\big{)}^{2}}{\big{(}P_{l}^{o}\big{)} ^{2}},\quad l=1,2,\ldots,S. \tag{60}\]
Therefore, \({\rm SNR}_{k}\) can be written in terms of the overall transmitted optical power of STACO-OFDM system as follows:
\[{\rm SNR}_{k}=\frac{2^{2l-1}\pi\big{(}P_{o}^{\rm avg}\big{)}^{2}\big{|}H_{k} \big{|}^{2}}{N_{0}B_{\rm sc}N\alpha_{l}^{o}},\quad l=1,2,\ldots,S. \tag{61}\]
The SNR per bit \(\gamma_{l,k}\) can also be calculated as
\[\gamma_{l,k}=\frac{{\rm SNR}_{l,k}}{\log_{2}M_{l}}=\frac{2^{2l-1}\pi\big{(}P_ {o}^{\rm avg}\big{)}^{2}\big{|}H_{k}\big{|}^{2}}{N_{0}B_{\rm sc}N\alpha_{l}^{o }\big{(}\log_{2}M_{l}\big{)}}\quad l=1,2,\ldots,S. \tag{62}\]
Then, the BER at the \(k\)th subcarrier of the \(l\)th stratum becomes [22]
\[{\rm BER}_{l,k}=\frac{4}{\log_{2}M_{l}}\Bigg{(}1-\frac{1}{\sqrt{M_{l}}}\Bigg{)} \sum_{i=1}^{\sqrt{M_{l}}/2}Q\Bigg{(}\left(2i-1\right)\sqrt{\frac{3\big{(}2^{2 l-1}\big{)}\pi\big{(}P_{o}^{\rm avg}\big{)}^{2}\big{|}H_{k}\big{|}^{2}}{\big{(}M_{l}-1 \big{)}\big{(}B_{\rm sc}N_{0}N\alpha_{l}^{o}\big{)}}}\Bigg{)}. \tag{63}\]
The total theoretical BER bound of the \(l\)th stratum is then calculated by using the same formula as (46):
\[{\rm BER}_{l}=\frac{1}{N_{l}^{\rm info}}\sum_{k=1}^{\big{(}N_{0}/2\big{)}-1}{ \rm BER}_{l,k},\quad l=1,2,3,\ldots,S. \tag{64}\]
The overall theoretical BER bound of STACO-OFDM can be calculated by using the formula defined in (47). | Journal of Computer Networks and Communications |
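For a single subcarrier, the Q-function sum in Eq. (63) has the standard square M-QAM form below once all channel and power constants are folded into one effective symbol SNR; this folding is our simplification for illustration, not the paper's code:

```python
import math

def qfunc(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mqam_ber(m, snr_per_symbol):
    """Gray-coded square M-QAM BER versus symbol SNR
    (the Q-function sum appearing in Eq. (63))."""
    bits = math.log2(m)
    s = sum(qfunc((2 * i - 1) * math.sqrt(3.0 * snr_per_symbol / (m - 1)))
            for i in range(1, int(math.sqrt(m) // 2) + 1))
    return 4.0 / bits * (1.0 - 1.0 / math.sqrt(m)) * s
```

For 4-QAM the expression collapses to \(Q(\sqrt{\rm SNR})\), a convenient sanity check.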
5,000,003 | 10.1155/2014/279320 | 82bd48e9-4b75-46b6-9387-e3820f4c84fc | https://onlinelibrary.wiley.com/doi/pdfdirect/10.1155/2014/279320 | Adal Mesa-Delgado | The Integral Field View of the Orion Nebula | -7.859375 | 82bd48e9-4b75-46b6-9387-e3820f4c84fc.md | 2,014 | chunk_82bd48e9-4b75-46b6-9387-e3820f4c84fc_13.md | wiley | ## 5 The AD Problem from IFS
### Role of Small-Spatial Scale Structures
Given its proximity, the Orion Nebula is the perfect target to investigate the possible relation between the AD problem and the presence of morphological structures. Tackling this issue certainly requires reliable detections of RLs emitted by heavy-element ions to investigate the RL-CEL discrepancy. From optical long-slit spectroscopy with the Intermediate Dispersion Spectrograph and Imaging System (ISIS) at the 4.2 m William Herschel Telescope, the author of this review and collaborators addressed this topic for the first time in the Orion Nebula at spatial scales of 1.2\({}^{\prime\prime}\)[11]. Very deep observations were performed in five slit positions of 3.7\({}^{\prime}\) long each. The slits were arranged on the Huygens region, covering different morphological structures such as proplyds, HH objects, and stratified bars. A total number of 730 one-dimensional spectra were extracted, and reliable detections of the O II multiplet 1 RLs were reported in 92% of them. The authors could then analyze the spatial distribution profiles of the RL-CEL discrepancy of the O\({}^{2+}\) abundance, which is usually quantified through the AD factor (ADF). In this review, we adopt the logarithmic form of the ADF, defined as the difference between the abundances derived from RLs and CELs. One of the major results of this work was that the ADF(O\({}^{2+}\)) remains rather constant along most of the observed areas of the nebula, but shows localized enhancements at the positions of the prominent HH objects HH202, HH203, and HH204. On average, the ADF(O\({}^{2+}\)) is about 0.15 dex, while in the HH areas the discrepancy increases up to 0.3-0.5 dex. Incorporating IFS has enormously improved our ability to locate spatially, with much more precision, areas of the nebula having a high AD. This is well illustrated in the IFS analyses of the NE-Orion-S edge [37] and HH202 [54], where it was possible to map the emission of O II RLs in both structures.
The ADF(O\({}^{2+}\)) maps of these two fields are shown in Figure 7. In the case of NE-Orion-S, the ADF(O\({}^{2+}\)) is slightly higher at the north-east corner of the field, though this does not seem to be related to the presence of any remarkable morphology when compared with the HST images of the Huygens region at that exact position (see Figure 1). On the contrary, the results found in the ADF(O\({}^{2+}\)) map of HH202 are encouraging: the maximum ADF(O\({}^{2+}\)) is located at the position where the gas flow reaches its maximum velocity, the HH202-S knot. The same research group carried out a subsequent study of HH202-S in which the emissions from the gas flow and the nebular background were spectrally resolved thanks to the high spectral resolution of the observations (\(R\approx 30,000\)). Interestingly, the ADF(O\({}^{2+}\)) in the gas flow component turned out to be \(0.35\pm 0.05\) dex, a much higher discrepancy than the value of \(0.11\pm 0.04\) dex found in the ambient gas. These results confirm what was found in the long-slit study and suggest a possible connection between high-velocity flows and high AD. To clearly establish the possible role of high-velocity flows in the AD problem, further investigation is still needed. The use of high-spectral-resolution IFS would be the ideal observational approach. The IFS studies of the proplyds HST1 and LV2 have also brought new clues to the AD problem (see [65, 66]). The possible role of proplyds was also explored in the early ISIS work presented above, but those observations did not count on reliable diagnostics to properly determine the proplyd densities. A striking result found in HST1 and LV2 is that the ADF(O\({}^{2+}\)) tends to zero when the physical conditions of the proplyds are properly accounted for, in both the fully opaque and transparent cases.

It is observed that the high densities of proplyds produce a clear enhancement of the O\({}^{2+}\) abundances derived from CELs with respect to the nebular background abundances, while those from RLs are basically similar in both cases. | Advances in Astronomy |
26 | 10.3390/app11020500 | 0e32bcc5-37e4-4103-bc81-79bcde92d8bf | https://www.mdpi.com/2076-3417/11/2/500/pdf | Fabrizio Pilo; Giuditta Pisano; Simona Ruggeri; Matteo Troncia | Data Analytics for Profiling Low-Voltage Customers with Smart Meter Readings | -10.03125 | 0e32bcc5-37e4-4103-bc81-79bcde92d8bf.md | 2,021 | chunk_0e32bcc5-37e4-4103-bc81-79bcde92d8bf_20.md | mdpi | # Data Analytics for Profiling Low-Voltage Customers with Smart Meter Readings
## 4 Results and Discussion
### Resulting in Typical Load Profiles
The result of the single customer's characterization process is graphically shown in Figure 6, wherein the real consumption profiles of a given customer on winter workdays are drawn. The variability of consumption on winter weekdays leads to an aggregation into three clusters, represented by their relevant centroids. In Figure 7, the most crowded cluster of Figure 6 is zoomed in to show the difference between the centroid (blue line), which can be used to represent that customer in that quarter of the year, and the average profile (red line), obtained by averaging all the real profiles of Figure 6. The average profile tends to flatten the valleys in the early hours of the day and to reduce the morning and evening peaks.

Figure 6: Resulting clusters of a given residential customer (first quarter weekday); the most crowded cluster is the first one.

Figure 7: Comparison between the centroid of the most crowded cluster (blue line) and the average profile of all the first quarter weekday profiles of Figure 6 (red line).

Table 3: Share of valid customers/final number of customers clustered (outliers excluded) in the two considered databases.

| Database (Year) | Main Residence | Secondary Residence | Agricultural | Commercial | Industrial |
|---|---|---|---|---|---|
| 2013 | 41,289/38,928 | 4577/4577 | 182/180 | 8963/8937 | 1023/1023 |
| 2017 | 24,453/22,838 | 2532/2527 | 112/112 | 3537/3533 | 1156/1145 |

In the following section, for the sake of brevity, the description and discussion of the results focus on three categories of customers (i.e., commercial, industrial, and main residence contract) and two typical days, one in winter and one in summer (i.e., the working days of the first and third quarters). Figures 8-16 show the resulting centroids of the two databases, DB2013 and DB2017. The resulting clusters are in descending order, from the largest to the smallest. Furthermore, the comparison between seasons within the same year and, finally, the comparison between years for the three categories of customers are reported. The comparison considers the most similar normalized profiles, besides the crowdedness of each cluster, with a calculation of the minimum root sum square error:
* The number of resulting clusters may vary from season to season both in DB2013 (i.e., Figures 8a,b and 14a,b) and in DB2017 (i.e., Figures 9a,b and 15a,b) for commercial customers and residential ones; this did not happen for the industrial customers (Figures 11a,b and 12a,b). * The profiles of the two typical days during the two seasons can be similar, as shown in Figures 8c, 9c, 11c, 12c, 14c and 15c, even if in some cases the peaks can be slightly moved or reduced. * Profiles of different years can be almost overlapped (i.e., Figures 10a, 13 and 16). * In some cases, the two-year comparison makes very similar shapes but higher/lower peak values in the two DBs (i.e., Figure 10b). Figure 10: Commercial (COM) end-users, comparison between normalized centroids of the two databases (DB2013 vs. DB2017): **(a)** winter workdays, **(b)** summer workdays. Figure 9: Database 2017, commercial (COM) end-users, normalized centroids: **(a)** winter workdays (four clusters), **(b)** summer workdays (three clusters), **(c)** comparison between the two typical days. Figure 11: Database 2013, industrial (IND) end-users, normalized centroids: (**a**) winter workdays (three clusters), (**b**) summer workdays (three clusters), (**c**) comparison between the two typical days. Figure 12: Database 2017, industrial (IND) end-users, normalized centroids: (**a**) winter workdays (five clusters), (**b**) summer workdays (five clusters), (**c**) comparison between the two typical days. Figure 13: Industrial (IND) end-users, comparison between normalized centroids of the two databases (DB2013 vs. DB2017): (**a**) winter workdays, (**b**) summer workdays. | Applied Sciences |
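The season-to-season comparison via minimum root sum square error can be sketched as a nearest-centroid pairing. This is a generic illustration under our own naming, not the paper's implementation:

```python
def rss_error(p, q):
    """Root sum square error between two equally sampled profiles."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def match_centroids(set_a, set_b):
    """Pair each centroid in set_a with the index of its most similar
    (minimum RSS error) centroid in set_b."""
    return [min(range(len(set_b)), key=lambda j: rss_error(c, set_b[j]))
            for c in set_a]
```

Applied to normalized centroids of two seasons (or two databases), the returned index list says which cluster shapes correspond across the two sets.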
27 | 10.1186/s13638-021-01981-9 | 0fb283b8-2061-4463-8dc1-4ae4c9ae4ade | https://jwcn-eurasipjournals.springeropen.com/counter/pdf/10.1186/s13638-021-01981-9.pdf | Huu Q. Tran; Ca V. Phan; Quoc-Tuan Vien | Performance analysis of power-splitting relaying protocol in SWIPT based cooperative NOMA systems | -9.375 | 0fb283b8-2061-4463-8dc1-4ae4c9ae4ade.md | 2,021 | springer_4_0fb283b8-2061-4463-8dc1-4ae4c9ae4ade_11.md | springer | ## 1 Proof
Considering the Rayleigh fading channel, \(J_{2}\) can be given by \[J_{2}=1-\exp{\left(\frac{-\tau_{1}}{\Omega_{1}}\right)}. \tag{20}\]
and \(J_{3}\) can be expressed as (see (21)).

\[\begin{split} J_{3}&=\Pr\left(|h_{2}|^{2}|h_{1}|^{2}\psi_{E}\rho<\gamma_{th_{2}},\ \frac{|h_{1}|^{2}\psi_{\ell}\,a_{2}\rho}{\psi_{\ell}|h_{1}|^{2}a_{1}\rho+1}>\gamma_{th_{2}}\right)\\ &=\begin{cases}\Pr\left(|h_{2}|^{2}<\dfrac{\gamma_{th_{2}}}{|h_{1}|^{2}\psi_{E}\rho},\ |h_{1}|^{2}>\dfrac{\gamma_{th_{2}}}{\psi_{\ell}\rho\left(a_{2}-a_{1}\gamma_{th_{2}}\right)}\right),&a_{2}>a_{1}\gamma_{th_{2}}\\ 0,&a_{2}\leq a_{1}\gamma_{th_{2}}\end{cases}\\ &=\int_{\tau_{1}}^{\infty}\int_{0}^{\frac{\gamma_{th_{2}}}{x\,\psi_{E}\rho}}f_{|h_{1}|^{2}}(x)\,f_{|h_{2}|^{2}}(y)\,dy\,dx=\int_{\tau_{1}}^{\infty}\frac{1}{\Omega_{1}}\left[1-\exp\left(\frac{-\gamma_{th_{2}}}{x\,\psi_{E}\rho\,\Omega_{2}}\right)\right]\exp\left(\frac{-x}{\Omega_{1}}\right)dx.\end{split} \tag{21}\]
The outage probability at \(D_{2}\) is given by
\[P_{D_{2},modir}=J_{2}\,+\,J_{3}. \tag{22}\]
\(\square\)
**Corollary 2**.: _The outage probability at \(D_{2}\) for high SNR can be determined as (see (23)), where \(K_{1}(\cdot)\) is the first-order modified Bessel function of the second kind [55, Eq. (3.324.1)]._

\[\begin{split} P_{D_{2},modir}^{\infty}&=\Pr\left(\frac{a_{2}}{a_{1}}<\gamma_{th_{2}}\right)+\Pr\left(|h_{2}|^{2}<\frac{\gamma_{th_{2}}}{\psi_{E}\rho|h_{1}|^{2}},\ \frac{a_{2}}{a_{1}}>\gamma_{th_{2}}\right)\\ &=\Pr\left(|h_{2}|^{2}<\frac{\gamma_{th_{2}}}{\psi_{E}\rho|h_{1}|^{2}},\ \frac{a_{2}}{a_{1}}>\gamma_{th_{2}}\right)\\ &=\int_{0}^{\infty}\left[1-\exp\left(\frac{-\gamma_{th_{2}}}{\psi_{E}\rho\,\Omega_{2}\,x}\right)\right]\frac{1}{\Omega_{1}}\exp\left(\frac{-x}{\Omega_{1}}\right)dx\\ &=1-2\sqrt{\frac{\gamma_{th_{2}}}{\psi_{E}\rho\,\Omega_{1}\Omega_{2}}}\,K_{1}\left(2\sqrt{\frac{\gamma_{th_{2}}}{\psi_{E}\rho\,\Omega_{1}\Omega_{2}}}\right).\end{split} \tag{23}\] | EURASIP Journal on Wireless Communications and Networking |
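The last step of (23) rests on the integral identity \(\int_0^\infty(1-e^{-c/x})\,\Omega_1^{-1}e^{-x/\Omega_1}\,dx=1-2\sqrt{c/\Omega_1}\,K_1(2\sqrt{c/\Omega_1})\), where \(c\) collects the constants appearing there. It can be sanity-checked numerically with only the standard library (no SciPy Bessel routines assumed), evaluating \(K_1\) from its integral representation; this is our own verification sketch, not the authors' code:

```python
import math

def k1(z, n=20000, t_max=20.0):
    """K1(z) = integral_0^inf exp(-z cosh t) cosh t dt, by composite Simpson."""
    h = t_max / n
    f = lambda t: math.exp(-z * math.cosh(t)) * math.cosh(t)
    s = f(0.0) + f(t_max)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

def outage_lhs(c, omega1, n=20000, x_max=60.0):
    """integral_0^inf [1 - exp(-c/x)] (1/omega1) exp(-x/omega1) dx,
    truncated to (eps, x_max] and Simpson-integrated."""
    eps = 1e-9
    h = (x_max - eps) / n
    f = lambda x: (1.0 - math.exp(-c / x)) * math.exp(-x / omega1) / omega1
    s = f(eps) + f(x_max)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(eps + i * h)
    return s * h / 3.0

def outage_rhs(c, omega1):
    """Closed form 1 - z K1(z) with z = 2 sqrt(c / omega1)."""
    z = 2.0 * math.sqrt(c / omega1)
    return 1.0 - z * k1(z)
```

Both sides agree to quadrature accuracy for moderate arguments, which gives confidence in the final line of (23).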
28 | null | 7afe9146-c070-4f5c-92e4-1df0cca3d84d | http://jase.tku.edu.tw/articles/jase-201303-16-1-11.pdf | null | null | -10.210938 | 7afe9146-c070-4f5c-92e4-1df0cca3d84d.md | null | chunk_7afe9146-c070-4f5c-92e4-1df0cca3d84d_9.md | tku_press | # Fuzzy Data Mining as a Tool to Infer Pollution Severity
## 5 Data Mining Using Fuzzy C-Means Clustering
Figure 7: Typical LC\({}_{\text{peak}}\)-THD data of leakage current signals under lightly polluted conditions (a) before clustering, (b) after clustering.
Figure 8: Typical plot of the fuzzy clustering process convergence criterion with respect to the number of iterations under lightly polluted conditions.
Figure 9 shows the LC\({}_{\text{peak}}\)-THD relationships captured at medium polluted conditions. Compared with lightly polluted conditions, the number of data points in cluster 2 is slightly increased. However, there is no significant increase in the data points corresponding to clusters 3 and 4. Similar plots of the LC\({}_{\text{peak}}\)-THD relationships captured at heavily polluted and very heavily polluted conditions are shown in Figures 10 and 11, respectively. From these figures, it is clear that the number of data points in clusters 3 and 4 increases considerably with increasing pollution. These fuzzy-clustered LC data plots clearly indicate the surface pollution condition of the insulators. For practical applications, it can be speculated that a cluster density above a certain threshold value could warrant corrective actions, which would be useful for the substation operator. The correctness of the fuzzy c-means clustering results should be verified using appropriate criteria and techniques. Several methods have been proposed in the literature to validate the accuracy of clusters [15]. Measuring the distance between the clusters is a common approach, which is done by measuring the distance between the closest members or the most distant members of the clusters. However, measuring the distance between the centers of the clusters (centroids) aims at finding the best clustering scheme. In this paper, the distance between the centroids is used to measure the cluster accuracy. Figure 12 shows the centroids of the clusters obtained at the four polluted conditions discussed earlier. The distance between the centroids of the clusters is denoted as D12, D13, D14, etc., as shown in Figure 12. At each pollution condition, the distance between the centroids of clusters \(i\) and \(j\) was calculated using the equation
\[D_{ij}=\sqrt{\left(x_{i}-x_{j}\right)^{2}+\left(y_{i}-y_{j}\right)^{2}}\]
where \(x\) and \(y\) are the \(\left(x,y\right)\) coordinates of the respective centroid points. Figure 13 shows the bar chart of the distances between the centroids calculated at the four polluted conditions. It is observed that the centroids of the clusters are closely located at each pollution condition. In order to understand the deviation in the distances between the cluster centroids, the standard deviation is calculated using the following equation,
\[\sigma=\sqrt{\frac{1}{N}\sum\left(D_{ij}-\mu\right)^{2}}\]
where \(\mu\) is the mean of the vector D\({}_{ij}\) and N is the length of the vector D\({}_{ij}\). The standard deviation values obtained for each distance between the centroids are also shown above the corresponding bars in Figure 13. It is observed that the standard deviation varies from 0.68 to 2.7, which is very small and within acceptable limits. This clearly indicates that the fuzzy c-means technique is reliable for clustering the leakage current data of power transmission line insulators. From the above results, it is noticed that the fuzzy c-means clustering technique is very useful for easy identification of the surface pollution severity of insulators used in high-voltage applications. It is also observed that the leakage current magnitude and THD relationships are directly related to the surface pollution severity. This can be easily understood from the cluster plots of the insulator obtained at the different polluted conditions.
Figure 9: Typical LC\({}_{\text{peak}}\)-THD data of leakage current signals under medium polluted conditions (a) before clustering, (b) after clustering.
Figure 10: Typical LC\({}_{\text{peak}}\)-THD data of leakage current signals under heavily polluted conditions (a) before clustering, (b) after clustering.
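The cluster-validity check described above (pairwise Euclidean distances between centroids, plus their standard deviation) can be sketched as follows; the centroid coordinates here are made-up placeholders, not values from the experiment.

```python
import math
from itertools import combinations

def centroid_distances(centroids):
    """Pairwise Euclidean distances D_ij between cluster centroids,
    keyed by the (i, j) index pair."""
    return {(i, j): math.dist(centroids[i], centroids[j])
            for i, j in combinations(range(len(centroids)), 2)}

def std_dev(values):
    """Population standard deviation of a vector of distances."""
    mu = sum(values) / len(values)
    return math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))

# Hypothetical centroids of the four LC_peak-THD clusters at one pollution level
centroids = [(1.0, 2.0), (4.0, 6.0), (8.0, 3.0), (2.0, 9.0)]
dists = centroid_distances(centroids)   # six pairs: D12, D13, D14, D23, D24, D34
spread = std_dev(list(dists.values()))
```

A spread close to zero means the pairwise centroid distances are nearly equal, which is the behaviour the paper reports across pollution levels.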
29 | 10.13164/re.2015.0757 | 618f03c0-b074-49b7-85a5-145df03fe954 | https://www.radioeng.cz/fulltexts/2015/15_03_0757_0764.pdf | S. A. Hosseini; B. Abolhassani; S. M. S. Sadough | A New Protocol for Cooperative Spectrum Sharing in Mobile Cognitive Radio Networks | -9.210938 | 618f03c0-b074-49b7-85a5-145df03fe954.md | 2,015 | chunk_618f03c0-b074-49b7-85a5-145df03fe954_6.md | radioeng_cz | # A New Protocol for Cooperative Spectrum Sharing
## 2 Network Model
As depicted in Fig. 1, we consider a CR network in which each CR node can move with a random speed \(0<v<6\) m/s, or stay fixed in its location during a given stop-time period. The transmit power of each node \(P_{t}\) is assumed to be fixed. The considered CR network and \(M\) primary networks are located in the same geographical area. Figure 1 shows the above-mentioned scenario for the case \(M=2\). Both primary networks use two non-overlapping equal bands. An ad-hoc CR network with 40 moving nodes is considered in our simulations. There is no central controlling unit in this network and all nodes can communicate with their neighbors using a common control channel (CCC). For packet transmission, we use the handshaking protocol (RTS-CTS-ACK) [21], [22]. A link request is made randomly in this model, and the number of source and destination nodes, their distances from each other, and the interference levels at the receiver are not known before the link request is made. After each RTS, the two latter parameters (the distance between sender and receiver and the interference level) are estimated by the receiver and are sent to the sender with a controlling response of CTS. Connection is not possible between these two nodes if the required distance in the CTS is more than \(R_{C}\), and consequently a blockage event occurs. However, connection is possible for this link if \(R_{C}\) is more than the required distance. Each CR user must be able to transmit over all allowable frequencies of the primary users simultaneously. To achieve this, each CR has a bank of fixed filters, by which it can select the desired sub-channel in each time period. Each CR has \(1\leq n_{t}\leq L\) transceivers, where \(L\) is the number of frequencies of a primary network. Secondary users (SUs) cooperate with each other to maximize the throughput of the whole CRN.
Cooperation of SUs refers to sharing local information (including SINR of receiving signal links and requested rate for the specific service) with other SUs.
In multi-channel MAC protocols, it is assumed that the channel frequencies are determined as a constant parameter at the beginning of link allocation and remain unchanged during the linking procedure. However, the operating frequency may change during the linking procedure in some protocols, such as BMC and DDMAC. These two methods are the most similar to our proposed method; therefore, we compare our proposed method with them in our simulations. In the BMC (best multi-channel) protocol, the first selected channel is the channel having the highest data rate. | Radioengineering
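The handshake-based admission rule described above can be sketched as follows. This is an illustrative fragment: the node positions, area size, and the value of \(R_C\) are arbitrary assumptions, and the real protocol would also account for the interference level reported in the CTS.

```python
import math
import random

def link_admitted(sender, receiver, r_c):
    """After the RTS/CTS exchange, the link is blocked when the estimated
    sender-receiver distance exceeds the coverage radius R_c."""
    return math.dist(sender, receiver) <= r_c

def blockage_probability(n_trials=20_000, area=100.0, r_c=30.0, seed=7):
    """Empirical blockage rate for uniformly placed sender/receiver pairs."""
    rng = random.Random(seed)
    blocked = sum(
        1 for _ in range(n_trials)
        if not link_admitted((rng.uniform(0, area), rng.uniform(0, area)),
                             (rng.uniform(0, area), rng.uniform(0, area)),
                             r_c)
    )
    return blocked / n_trials
```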
5,000,004 | 10.1029/2023ea003380 | fe7b3dfd-d30d-4e63-ad05-64f1b4d28ea8 | https://agupubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2023EA003380 | Yaxian Li; Wanlin Gong; Chunxiao Yan; Kai Zhu; Min Zhang; Qiang Zhang | Freshly Developed Low‐Latitude Postmidnight‐To‐Dawn F‐Region Ionospheric Irregularities Over China on 13 November 2015 | -10.546875 | fe7b3dfd-d30d-4e63-ad05-64f1b4d28ea8.md | 2,024 | chunk_fe7b3dfd-d30d-4e63-ad05-64f1b4d28ea8_11.md | wiley | # Earth and Space Science
## 4 Discussion
and Tkisi, Russia, during 9-24 UT on November 13 are presented in Figure 8. For comparison, the IMF Bz and AE index are also exhibited in Figures 8a and 8b, respectively. The AE index is calculated by utilizing the upper and lower envelopes of the H component disturbances observed by the selected magnetometers in the Northern Hemispheric auroral zone. Thus the AE index variations alone are insufficient to pinpoint the precise time and position of substorm onset ([PERSON] & [PERSON], 1983). From Figures 8c and 8d, the H component over the two stations exhibited a sharp decline after 18 UT on 13 November 2015, indicating their closest approach to the location of the substorm's onset. This can also help us to pinpoint the time of the substorm's onset. A remarkable decrease of the H component occurred around 18:30 UT (denoted by the red arrows), indicating that the onset of the substorm occurred tens of minutes before the F-layer uplift at Fuke and Sanya (yellow shaded area). Then the overshielding PEF with eastward polarity can be activated rapidly following the substorm onset, according to observations and simulation results ([PERSON] et al., 2011; [PERSON] et al., 2018; [PERSON] et al., 2009; [PERSON] et al., 2009). To sum up, the sudden northward excursion of IMF Bz during 17:43-18:02 UT (01:01-01:20 LT) and 18:20-18:45 UT (01:38-02:03 LT) as well as the substorm onset after 18:30 UT (01:48 LT) might collectively activate the overshielding PEF with eastward polarity in the postmidnight sector. Though there existed the westward undershielding PEF resulting from the comparably rapid southward turning of IMF Bz during 18:10-18:20 UT (01:28-01:38 LT), the collective effects of the eastward overshielding PEF during the postmidnight period might surpass the westward undershielding PPEF.
Consequently, the equatorial/low-latitude zonal electric fields during the postmidnight sector were greatly modulated and then the rapid and significant F-layer elevation were induced (after 02:18 LT), which provided the favorable conditions for the R-T instability growth and EPB generations. About 30 min after the F-layer was elevated to its peak height (04:03 LT), the fresh and evolutionary FAI echoes emerged in the HCOPAR's FoV (04:37 LT) and the EPB-related TEC depletions and ROTI enhancements were visible in the TEC measurements (\(\sim\)04-06 LT). Figure 8.— The variations of the (a) IMF Bz, (b) AE index, H component (c) at Vize Island, Russia, and (d) at Tkisi, Russia with universal time. The yellow shaded area indicates the time period with upward plasma drift at Sanya. Red arrows indicate the substorm onsets. | Earth and Space Science |
30 | 10.3390/app142411960 | a7fab3b6-9996-4c62-bc35-439e364a4d45 | https://www.mdpi.com/2076-3417/14/24/11960/pdf | Siyuan Cao; Ying Yuan; Xiaodong Sun; Miao Zhang; Ningbo Han; Aihong Zhou; Wensong Zhang | The Debris Flow Risk Prediction Model Based on PCA-Elman | -9.984375 | a7fab3b6-9996-4c62-bc35-439e364a4d45.md | 2,024 | chunk_a7fab3b6-9996-4c62-bc35-439e364a4d45_20.md | mdpi | # The Debris Flow Risk Prediction Model Based on PCA-Elman
## 4 Results
### Generalization Ability
When developing a predictive model, the objective is not only to generate accurate predictions on existing data but also to ensure the model's reliability when applied to new, unseen data. In this investigation, 26 mudslide samples from the Bailong River Basin, as referenced in [PERSON] et al. [28], were utilized to conduct hazard assessments to validate the predictive capacity of the PCA-Elman model on unfamiliar data, as illustrated in Table 6. Owing to regional disparities, different evaluation indicators were chosen; consequently, out of the 10 indicators selected for Yunnan Province, 7 were derived from the Bailong River Basin, while the remaining 3 indicators (S2, S6, S10) underwent null-value treatment.
From the 26 debris flow samples, a set of 9 samples was selected, including 3 samples from each of the different risk levels, to serve as the model's prediction set. Additionally, to mitigate the potential impact of data imbalance on the model, the training set utilized a dataset expanded with the ADASYN algorithm. The 63 samples were divided into 5 subsets, and 4 of these subsets were selected in rotation to form the training set, ensuring that each subset served as the training set once, with the experiment repeated 5 times. After principal component analysis, the PCA-Elman model, PCA-BP model, PCA-SVM model, and PCA-RF model were established, with their prediction results shown in Table 6. Compared to the PCA-BP, PCA-SVM, and PCA-RF models, the PCA-Elman model demonstrated higher accuracy when dealing with unknown debris flow samples, indicating a stronger generalization capability.
Furthermore, according to the research findings in reference [29], spatial variability is present during the study of debris flow risk. When dealing with debris flow samples from two different regions, the differences in geological conditions such as topography, geomorphology, and meteorological hydrology between these areas may impact the model's predictive
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**ModelNumber** & **1** & **2** & **3** & **4** & **5** & **Average Accuracy** \\ \hline PCA-Elman & 66.67\% & 55.56\% & 44.44\% & 55.56\% & 33.33\% & 51.11\% \\ PCA-BP & 33.33\% & 44.44\% & 44.44\% & 33.33\% & 33.33\% & 37.77\% \\ PCA-SVM & 22.22\% & 22.22\% & 33.33\% & 33.33\% & 11.11\% & 24.42\% \\ PCA-RF & 44.44\% & 33.33\% & 33.33\% & 44.44\% & 44.44\% & 39.99\% \\ \hline \hline \end{tabular}
\end{table}
Table 6: Model prediction accuracy on unknown data.
accuracy. To address this, the remaining 17 debris flow samples from the Bailongjiang River basin were evenly distributed into 5 subsets, and cross-validation was used again to predict the 9 Bailongjiang River debris flow samples. The results are shown in Table 7. Compared to the results in Table 6, it was found that incorporating a certain amount of data from the Bailongjiang River basin into the training samples can reduce spatial variability and significantly improve the model's predictive accuracy for unknown regions. | Applied Sciences |
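The rotation scheme described above (five subsets, with four used for training in turn) can be sketched generically; the splitting and accuracy averaging below are an illustration and carry no model-specific logic.

```python
def rotate_folds(samples, k=5):
    """Split the samples into k subsets and yield (train, held_out)
    pairs so that each subset is held out exactly once."""
    folds = [samples[i::k] for i in range(k)]
    for i in range(k):
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, folds[i]

def average_accuracy(per_run_accuracies):
    return sum(per_run_accuracies) / len(per_run_accuracies)

# The five per-run accuracies reported for PCA-Elman in Table 6 (percent)
pca_elman_runs = [66.67, 55.56, 44.44, 55.56, 33.33]
```

Averaging the five runs reproduces the 51.11% figure quoted for PCA-Elman in Table 6.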
31 | 10.7305/automatika.2016.07.1084 | ebbc68ca-98bb-4380-8f30-d19724bc10ca | https://www.tandfonline.com/doi/pdf/10.7305/automatika.2016.07.1084?download=true | Tadej Justin; France Mihelič; Janez Žibert | Towards automatic cross-lingual acoustic modelling applied to HMM-based speech synthesis for under-resourced languages | -9.625 | ebbc68ca-98bb-4380-8f30-d19724bc10ca.md | 2,016 | chunk_ebbc68ca-98bb-4380-8f30-d19724bc10ca_12.md | taylor_and_francis | ## 2 Methodology
### Cross-language phoneme mapping
#### Automatic cross-language phoneme mapping technique
To find the most similar phonemes from different language speech databases and different speakers we propose an approach adopted from the field of speaker verification. The approach is based on modelling the acoustic space with Gaussian mixture models (GMMs) that are derived from an initial universal background model (UBM) [29] using maximum a posteriori (MAP) adaptation [30]. The UBM serves as the initial model for the MAP adaptation. In our case we built the UBM on all the acoustic data, and the language-dependent GMMs for each individual phoneme were then estimated by performing a MAP adaptation on language-dependent data. The UBM is prone to an unbalanced data population; i.e., in the speaker verification task the female and male speech data should be balanced, otherwise the obtained UBM would be biased towards the dominant subpopulation. In our case we try to overcome the problem of language-dependent unbalanced speech data. One possible solution is to develop only a target-language-dependent UBM model, since we disposed of only a small amount of Slovenian speech data. The normalization of the likelihood scores of the language-dependent GMMs also gains an important role: it allows us to find phoneme differences or similarities in the normalized spaces. With the use of the small target-language-dependent UBM and MAP adaptation we obtained the phoneme GMMs. The prior UBM model is therefore used for the initialization of the training process with the MAP adaptation and later for additionally normalizing the cross-language phoneme distances. The training of the UBM was performed using the EM algorithm [31] to estimate the language-dependent GMM densities. The initialization of the UBM training was performed with the Linde-Buzo-Gray hierarchical method [32]. The training of the UBM is performed in an unsupervised manner on all the available unlabelled data from the under-resourced language, based on MFCC acoustic features [33]. For each utterance we calculated the MFCC vectors with the HTK toolkit [34]. We obtained feature vectors with a length of 36 features, consisting of 12 MFCC coefficients, 12 delta and 12 delta-delta coefficients per frame. The feature extraction was guided by the following parameters: a 32-ms-wide Hamming window with a 10-ms frame shift, a low cut-off frequency of 300 Hz, a high cut-off frequency of 7600 Hz, and a pre-emphasis coefficient of 0.97.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**VNTV** & **CMU** & **CMU** & **IPA** \\ & **automatic** & **manually** & **mapping** \\ & **mapped** & **mapped** & \\ \hline a & 9 & 9 & - \\ \hline a: & a & 9 & - \\ \hline b: & 9 & 9 & - \\ \hline b & w & b & b \\ \hline d & d & d & d \\ \hline e & \(\varepsilon\) & \(\varepsilon\) & - \\ \hline e: & \(\varepsilon\) & \(\varepsilon\) & - \\ \hline g & 9 & 9 & 9 \\ \hline \(\varepsilon\): & i & 9 & - \\ \hline
3: & 9 & 9 & - \\ \hline f & v & f & f \\ \hline g & u & 9 & g \\ \hline i & e & i & i \\ \hline i & e & i & - \\ \hline i & \(\varepsilon\) & \(\delta\) & i \\ \hline j & \(\varepsilon\) & j & j \\ \hline k & k & k & k \\ \hline l & m & i & - \\ \hline \(\bar{\Lambda}\) & n & i & - \\ \hline m & \(\eta\) & m & m \\ \hline n & n & n & n \\ \hline \(\eta\) & \(\eta\) & \(\eta\) & - \\ \hline o & o o & a & - \\ \hline o: & o o & a & - \\ \hline p & p & p & p \\ \hline r & r & r & r \\ \hline s & s & s & s \\ \hline f & f & f & f \\ \hline t & t & t & t \\ \hline ts & s & joined t and s & - \\ \hline t g & f & t & t \\ \hline u & o o & u & u \\ \hline u: & o o & u & - \\ \hline v & o o & v & v \\ \hline w & o o & w & w \\ \hline x & k & h & h \\ \hline z & z & z & z \\ \hline
3 & 5 & 5 & 5 \\ \hline \end{tabular}
\end{table}
Table 2: The Slovenian-English phoneme mapping table comparison for the automatic and manual approaches, in IPA notation. | Automatika
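The MAP step described above can be sketched with the standard relevance-MAP update of the component means (Reynolds-style adaptation). The one-dimensional features and the relevance factor below are illustrative assumptions, not the paper's actual configuration.

```python
def map_adapt_means(ubm_means, posteriors, frames, relevance=16.0):
    """Relevance-MAP adaptation of GMM component means:
        mu_k_new = alpha_k * E_k + (1 - alpha_k) * mu_k_ubm,
    with alpha_k = n_k / (n_k + relevance), n_k the soft frame count
    and E_k the posterior-weighted mean of the adaptation frames."""
    adapted = []
    for k, mu_ubm in enumerate(ubm_means):
        n_k = sum(p[k] for p in posteriors)
        if n_k == 0.0:
            adapted.append(mu_ubm)          # unseen component: keep the UBM mean
            continue
        e_k = sum(p[k] * x for p, x in zip(posteriors, frames)) / n_k
        adapted.append(n_k / (n_k + relevance) * e_k
                       + relevance / (n_k + relevance) * mu_ubm)
    return adapted
```

Components with little adaptation data stay close to the UBM, which is exactly why the prior UBM can later serve to normalize the cross-language phoneme distances.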
5,000,005 | 10.1002/2016ea000204 | 34b73566-ff98-4202-a5c7-77c9e7ddaaa6 | https://agupubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/2016EA000204 | K. W. Bowman; J. Liu; A. A. Bloom; N. C. Parazoo; M. Lee; Z. Jiang; D. Menemenlis; M. M. Gierach; G. J. Collatz; K. R. Gurney; D. Wunch | Global and Brazilian Carbon Response to El Niño Modoki 2011–2010 | -10.828125 | 34b73566-ff98-4202-a5c7-77c9e7ddaaa6.md | 2,017 | chunk_34b73566-ff98-4202-a5c7-77c9e7ddaaa6_22.md | wiley | ## 6 Results
### Brazilian Carbon Flux Change in Global Context
(2015) found that the net carbon flux change was \(-0.34\pm 0.60\) PgC yr\({}^{-1}\), which is slightly higher than our results, but less than the \(-0.42\pm 0.2\) PgC yr\({}^{-1}\) from [PERSON] et al. (2014). The fire carbon flux changes reported in [PERSON] et al. (2015), [PERSON] et al. (2014), and this study range from \(-0.21\) to \(-0.24\) PgC yr\({}^{-1}\). [PERSON] et al. (2016) used the aircraft data in [PERSON] et al. (2014) to estimate a basinwide nonfire net biome exchange (NBE) carbon flux change of \(-0.28\pm 0.45\) PgC yr\({}^{-1}\). Based upon the uncertainties, however, that study cannot reject a neutral NEP carbon flux change. The increase in 2011 relative to 2010 of both respiration and GPP could be explained by a number of processes, such as a link between heterotrophic respiration and soil moisture (e.g., [PERSON] et al., 2013), or lagged effects such as tree mortality (e.g., [PERSON] et al., 2013), which would offset the increased GPP from faster carbon pools. [PERSON] et al. (2015) used upscaled site-level data to show that drought suppressed Amazon-wide photosynthesis in 2010 by 0.38 PgC (0.23-0.53 PgC) and that respiration increased in 2011 relative to 2010, driven primarily by a reduction in autotrophic respiration in 2010. An important consideration in comparing these studies is the relationship between the observational coverage and the spatial domain of fluxes that influences those observations. In the case of flux inversions using aircraft data, the zonal CO\({}_{2}\) gradient across the basin was exploited. Flux inversions with satellite column CO\({}_{2}\) observations, on the other hand, use observations over a much broader domain based upon the sensitivity of the observations to surface fluxes. Figures S3 and S4 show the annual data density of quality-filtered GOSAT observations at 4\({}^{\circ}\times 5^{\circ}\) for 2010 and 2011, respectively.
The spatial pattern of annual sampling is approximately the same between these two years. In both years, most of the observations in Brazil are south of the Amazonian basin. [PERSON] et al. (2015) quantified the source-receptor relationships between concentrations and fluxes for January and July 2010. This study showed a strong impact of tropical South American fluxes on midlatitude observations with sensitivities exceeding 0.2 ppm/KgC/m\({}^{2}\)/s (see Figure 6 in [PERSON] et al., 2015). The transit time from concentrations emitted in tropical South America to midlatitude South America is within a couple of days, and the dwell time lasts over a month, indicating continual influence of tropical fluxes. Based upon an Observing System Simulation Experiment, removal of midlatitude South American observations led to over a 50% impact on western Amazonian fluxes where cloud cover is most persistent (see Figure 12 in [PERSON] et al., 2015). On the other hand, tropical South American concentrations -- especially eastern Amazon -- are influenced by tropical African fluxes in both January and July. The cumulative sensitivity over a 1 month time frame of tropical African fluxes to tropical South American concentrations is roughly 25% (see Figure 12 in [PERSON] et al., 2015). Consequently, the inversion system exploits the meridional source-receptor relationships between Amazonian fluxes and midlatitude GOSAT observations more so than basinwide zonal gradients observed by aircraft. For this source-receptor relationship, we do not expect any influence on the flux changes given the nearly identical annual sampling over midlatitude South America as shown in Figures S3 and S4. These source-receptor relationships facilitate the interpretation of posterior CO\({}_{2}\) concentration comparisons to independent aircraft observations used in [PERSON] et al. (2014). | Earth and Space Science
32 | 10.3390/app10082681 | a014a8b5-add0-4a8a-9cd9-249cb39e3a38 | https://www.mdpi.com/2076-3417/10/8/2681/pdf | Kaito Miwa; Hiroki Ebihara; Xu Fang; Wakana Kubo | Photo-Thermoelectric Conversion of Plasmonic Nanohole Array | -10.40625 | a014a8b5-add0-4a8a-9cd9-249cb39e3a38.md | 2,020 | chunk_a014a8b5-add0-4a8a-9cd9-249cb39e3a38_8.md | mdpi | # Photo-Thermoelectric Conversion of Plasmonic Nanohole Array
## 3 Results and Discussion
### Quantification of Plasmonic Local Heating
reported that temperature increases for periodic plasmonic nanoparticles cannot be estimated by considering the nanoparticles individually [26]. They showed that the temperature shift originating from a two-dimensional array of 16 gold nanoparticles was enhanced more than four times with respect to that of a single nanoparticle, because the heating effect can be enhanced as a result of accumulation effects and inter-particle Coulombic interactions [27]. We calculated the local heating temperatures produced upon changing the number of nanohole units from 1 to 49. As the unit number increases, the amount of plasmonic local heating increases because of a rise in the external temperature generated by the neighboring nanoholes [28; 29]. Figure 4 shows a logarithmic fit of the plasmonic local temperature versus the number of nanohole units in the array. The fit to the data, also shown in Figure 4, is expressed analytically by Equation (3). \[y=0.9171\mathrm{ln}(x)+1.239 \tag{3}\]
As the number of nanoholes included in an illumination spot is \(4.27\times 10^{4}\), the estimated plasmonic local heating temperature at the illumination spot is 11.0 K, which is more than 2.4 times higher than that estimated from the experimental data. A possible reason for this difference between the temperatures estimated experimentally and numerically is the discrepancy between the optical power used in the simulation and the actual power of the illuminated light. Our laser beam has a Gaussian profile, whereas we selected a uniform light spot for the illumination calculations; hence, the actual light power illuminating each nanohole is different. Nonetheless, it should be noted that our calculation technique generated a relatively good estimate of the plasmonic local heating temperature of the nanoholes, which contributes to the thermoelectric conversion process. In other words, this calculation technique is valid for plasmonic local heating and should facilitate the optimization of configurations for plasmonic local heating. Figure 3: Electric field enhancement produced by plasmonic local heating. Cross-sectional view of nanohole: (**a**) scanning electron microscopy (SEM) image; and (**b**) electric field distribution.
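Equation (3) and the extrapolation quoted above can be reproduced directly; the only inputs are the fitted coefficients and the nanohole count from the text.

```python
import math

def local_heating_K(n_holes):
    """Logarithmic fit of Equation (3): y = 0.9171*ln(x) + 1.239,
    giving the plasmonic local heating temperature rise in kelvin."""
    return 0.9171 * math.log(n_holes) + 1.239

# Extrapolating to the ~4.27e4 nanoholes inside the illumination spot
t_spot = local_heating_K(4.27e4)   # about 11.0 K, as quoted in the text
```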
33 | null | 215caea0-a25f-44f0-8261-5288ed7eb007 | https://en.wikipedia.org/wiki/Rebreather | null | null | -10.132813 | 215caea0-a25f-44f0-8261-5288ed7eb007.md | null | chunk_215caea0-a25f-44f0-8261-5288ed7eb007_9.md | wikipedia | Purging should be done while breathing off the unit so that the inert gas in the user's lungs and body tissues that finds its way into the loop is also removed from the system.
Carbon dioxide buildup will occur if the scrubber medium is absent, badly packed, inadequate or exhausted. The normal human body is fairly sensitive to carbon dioxide partial pressure, and a buildup will be noticed by the user. However, there is not often much that can be done to rectify the problem except changing to another breathing gas supply until the scrubber can be repacked. Continued use of a rebreather with an ineffective scrubber is not possible for very long, as the levels will become toxic and the user will experience extreme respiratory distress, ultimately leading to loss of consciousness and death. The rate at which these problems develop depends on the volume of the circuit and the metabolic rate of the user at the time. Carbon dioxide buildup can also occur when a combination of exertion and work of breathing exceeds the capacity of the user. If this occurs where the user cannot reduce exertion sufficiently, it may be impossible to correct. This problem is more likely to occur with diving rebreathers at depths where the density of the breathing gas is severely elevated. [31][32][33] The only recourse is to vent the expelled breath outside the closed system, therefore not reusing the oxygen, and thereby increasing use of the gas mixture, but this is not an option in every field of application.
Leakage of toxic gases into the breathing loop
Industrial rebreathers are often used where the ambient air is contaminated and may be toxic. Parts of the loop will be at a slightly lower than external ambient pressure during inhalation, and if the circuit is not airtight external gases may leak in. This is a particular issue around the edge of a full-face mask, where the rubber mask skirt must seal against the user's face.
Fire hazards of high concentration of oxygen
High partial pressures of oxygen greatly increase fire hazard, and many materials which are self-extinguishing in atmospheric air will burn continuously in a high oxygen concentration. This is more of a hazard for terrestrial applications such as rescue and firefighting than for diving, where the ignition risk is relatively low.
Caustic cocktail
Caused by a loop flood reaching the absorbent canister, so only applicable in immersed applications.
Failure modes
Scrubber failure
The term "break-through" means the failure of the scrubber to continue removing sufficient carbon dioxide from the gas circulating in the loop. This will inevitably happen if the scrubber is used too long, but can happen prematurely in some circumstances. There are several ways that the scrubber may fail or become less efficient:
- Complete consumption of the active ingredient in a "general break through". Depending on scrubber design and user workload, this may be gradual, allowing the user to become aware of the problem in time to make a controlled exit or bailout to open circuit, or relatively sudden, triggering an urgent or emergency response. - Bypassing the absorbent. The absorbent granules must be packed closely so that all exhaled gas comes into contact with the surface of soda lime and the canister is designed to avoid any large spaces or gaps between the absorbent granules or between the granules and the canister walls that would let gas bypass contact with the absorbent. If any of the seals, such as O-rings, or spacers that prevent bypassing of the scrubber, are not present or not fitted properly, or if the scrubber canister has been incorrectly packed or fitted, it may allow the exhaled gas to bypass the absorbent, and the scrubber will be less effective. This failure mode is also called "tunneling" when absorbent settles to form void spaces inside the canister. Bypass will cause an unexpected early break-through. | null |
34 | 10.48550/arXiv.2408.15639 | c25692b7-b6f9-496a-95c2-36fefb4f09a9 | https://arxiv.org/pdf/2408.15639 | Beatriz Soret; Israel Leyva-Mayorga; Antonio M. Mercado-Martínez; Marco Moretti; Antonio Jurado-Navas; Marc Martinez-Gost; Celia Sánchez de Miguel; Ainoa Salas-Prendes; Petar Popovski | Semantic and goal-oriented edge computing for satellite Earth Observation | -6.148438 | c25692b7-b6f9-496a-95c2-36fefb4f09a9.pdf | 2,024 | arxiv_1_c25692b7-b6f9-496a-95c2-36fefb4f09a9_9 | arxiv | # Semantic and goal-oriented edge computing for satellite Earth Observation
## III LEO satellite constellations as an edge layer
With the advent of softwarization of network functions, modern network edge elements oftentimes possess general-purpose processors. Edge computing exploits the processing resources at the network edge nodes to execute algorithms that operate either on user or local data. Edge computing reduces the latency when compared to cloud computing for user-initiated tasks, and enables traffic offloading and energy minimization using algorithmic compression [1].
These benefits are particularly relevant in LEO satellites, often organized in constellations [1]. The LEO satellite constellation can provide an edge layer to existing EO satellites, or the satellites can be multi-purpose and do both the data acquisition and the processing. Moreover, the density of the constellation determines the mode of operation and the performance. Sparsely deployed LEO constellations have intermittent connectivity through the feeder links. In these cases, the latency with satellite edge computing can be as low as a few tens of milliseconds, whereas the latency with cloud computing can be in the order of a few hours, until the satellite can find a path towards the cloud server, requiring a high storage capacity at the satellite. If the constellation is densely deployed and inter-satellite links (ISLs) [14] are implemented, the satellites operate as a distributed edge computing architecture for distributed learning that avoids long propagation delays and increased traffic loads in the links towards the cloud servers. The feeder links and the ISLs can be based on conventional Radio Frequency (RF) technology, although there is growing interest in free-space optical (FSO) and hybrid FSO/RF solutions.
The conventional approach to EO is sending the raw data to ground. This requires not only high capacity in the communication links but also storage at the space segment when the connectivity is intermittent. The feeder links, uplink (UL) and downlink (DL), are particularly prone to congestion, since these are usually the links with the lowest capacity due to the movement of the satellites and the impact of atmospheric conditions. Satellite edge computing can alleviate congestion and expedite data processing in EO by providing the computing capability to run both classical and AI-based algorithms on board the satellites. Namely, images can be processed and compressed by the cooperating edge satellites, either with a classical algorithm, such as JPEG, or with more advanced semantic-empowered processing algorithms (e.g., object recognition and prediction), before being sent to the ground station (GS). | null
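As a rough illustration of the congestion argument above, the sketch below compares feeder-downlink times for a raw capture against classically and semantically compressed versions. The link rate, image size, and compression ratios are illustrative assumptions, not figures from the text:

```python
# Hedged sketch: link rate, image size, and compression ratios are assumed.

def downlink_seconds(payload_bits: float, link_rate_bps: float) -> float:
    """Time to move a payload over the feeder downlink."""
    return payload_bits / link_rate_bps

RAW_IMAGE_BITS = 8 * 1024**3    # assumed ~1 GiB raw capture
FEEDER_RATE_BPS = 100e6         # assumed 100 Mbit/s feeder downlink

raw = downlink_seconds(RAW_IMAGE_BITS, FEEDER_RATE_BPS)
jpeg = downlink_seconds(RAW_IMAGE_BITS / 10, FEEDER_RATE_BPS)        # ~10:1 classical codec
semantic = downlink_seconds(RAW_IMAGE_BITS / 1000, FEEDER_RATE_BPS)  # only detected objects

print(f"raw: {raw:.1f} s, JPEG: {jpeg:.1f} s, semantic: {semantic:.2f} s")
```

Under these assumptions the raw transfer needs over a minute of contact time, while the semantic payload fits in a fraction of a second, which is why on-board processing matters most when feeder-link contact windows are short.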
35 | 10.1186/s13635-025-00203-9 | ec2ab4de-8f5b-412b-8b6d-9f2cbac0f2ac | https://jis-eurasipjournals.springeropen.com/counter/pdf/10.1186/s13635-025-00203-9.pdf | Yehia Ibrahim Alzoubi; Alok Mishra | Differential privacy and artificial intelligence: potentials, challenges, and future avenues | -9.851563 | ec2ab4de-8f5b-412b-8b6d-9f2cbac0f2ac.md | 2,025 | springer_3_ec2ab4de-8f5b-412b-8b6d-9f2cbac0f2ac_10.md | springer | ## 4 Potentials of differential privacy and artificial intelligence combination
### Enhancing privacy
Differential privacy enables AI models to be developed and used on sensitive data while mathematically protecting individual privacy. This is critical in industries such as healthcare and banking, where data security is vital. This
theme has been reported in 13 references (27%) from the selected articles. Table 3 illustrates how differential privacy techniques can improve AI privacy.
By incorporating controlled noise into data or query results, differential privacy prevents the identification of specific individuals, even within large datasets, ensuring that individual data points remain confidential [14]. This added noise masks the details of any single individual's data point while allowing the model to learn general trends and patterns [42, 49]. The noise level can be adjusted to balance the desired privacy level and acceptable accuracy trade-off [64]. Differential privacy protects against inference attacks by ensuring that outputs do not reveal individual data points, thereby strengthening the security of AI systems [41]. It can also be integrated with federated learning, where AI models are trained across decentralized devices while keeping data localized, ensuring that even if local data is compromised, the overall privacy of individuals is maintained [19]. Moreover, this added noise allows AI models to learn patterns without compromising individual patient privacy [49, 11].
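The noise-addition step described above can be made concrete with the standard Laplace mechanism. This is a minimal sketch, not code from any of the cited works; the sensitivity and epsilon values in the usage line are illustrative assumptions:

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: random.Random) -> float:
    """Release true_value plus Laplace(0, sensitivity/epsilon) noise, giving
    epsilon-differential privacy for a query with the given L1 sensitivity
    (e.g. 1 for a counting query)."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                  # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    # Inverse-CDF sampling of the Laplace distribution.
    return true_value - scale * sign * math.log(1.0 - 2.0 * abs(u))

# Illustrative release of a count of 42 with sensitivity 1 and epsilon 0.5.
noisy_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5,
                                rng=random.Random(0))
```

Lowering epsilon enlarges the noise scale, which is exactly the privacy-accuracy trade-off discussed above.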
Many sectors are subject to rigorous data privacy requirements, including GDPR, HIPAA, and CCPA. Differential privacy enables enterprises to comply with these rules by providing measurable privacy assurances and ensuring that AI models manage data per legal norms. The authors in [58] emphasized the significance of reviewing and standardizing differential privacy approaches for reliable AI development [58]. Furthermore, the authors in [16] stressed the need for standardization and recommended practices for implementing differential privacy in real-world systems. It examines many issues and potential solutions to ensure the efficacy and dependability of differential privacy strategies [16].
Several studies have explored how differential privacy can be applied to NLP models, which often deal with sensitive information like text data [19]. The authors in [41, 50], and [46] explored specific differential privacy mechanisms that can be used to achieve enhanced privacy protection in AI applications. They studied the Lipschitz property-based differential privacy classification, the RNN-based differential privacy, and the privacy-aware trajectory generation model with differential privacy (differential privacy-TrajGAN), respectively. The authors in [19] highlighted the potential of differential privacy to not only protect privacy but also improve the overall utility (accuracy) of AI models in some cases. This can happen because adding noise can sometimes reduce the impact of biases present in the data, leading to more accurate models [11].
Implementing differential privacy in AI models is crucial for ensuring privacy when working with sensitive data, but several challenges arise in real-world applications. Differential privacy's core mechanism--adding noise to the data--can preserve privacy, but it may also reduce the model's accuracy if not carefully tuned [15]. The feasibility of enhancing privacy with differential privacy depends on careful tuning of the privacy-accuracy trade-off and computational resource allocation [64]. Its adoption in privacy-sensitive industries is growing, but the practical implementation still faces challenges. | EURASIP Journal on Information Security |
36 | 10.3390/s17030452 | b99ff078-5df9-4c2a-9500-3b162dc07b11 | https://www.mdpi.com/1424-8220/17/3/452/pdf | Hong-Hu Zhu; Bin Shi; Cheng-Cheng Zhang | FBG-Based Monitoring of Geohazards: Current Status and Trends | -8.710938 | b99ff078-5df9-4c2a-9500-3b162dc07b11.md | 2,017 | chunk_b99ff078-5df9-4c2a-9500-3b162dc07b11_21.md | mdpi | # FBG-Based Monitoring of Geohazards: Current Status and Trends
## 4 Current Status of Applying FBG Systems for Geohazard Monitoring
### Debris Flow Monitoring
The conventional instruments for debris flow monitoring include photocells, geophones, seismometers, and wire, ultrasonic, laser and radar sensors [73]. The key problem here is how to detect the initiation of the debris flow. Monitoring and warning of debris flows using FBG inclinometers and column-nets was reported by [PERSON] et al. [49]. The inclinometer can perform real-time measurement of borehole displacements, as seen in Figure 9. The FBG column-net, with FBG sensors glued on the bottom of its steel pipes, was installed in the debris flow direction and functioned by detecting the strain at the bottom, which was affected by impact forces when debris flows pass by. In Weijiagou Valley, southwest China, the FBG sensors were used to detect the initiation of the debris flow process. Once the impact force rises to a certain value, the column-net will break. The researchers, however, did not propose a threshold that would give a suitable warning of a debris flow. The interpretation and correlation of the measured data is still an unsolved problem.
[PERSON] et al. pointed out that FBG-based ground vibration and noise monitoring can be an effective approach for detecting the occurrence of landslides and debris flows [64]. A hybrid Mach-Zehnder and Sagnac interferometer was developed to establish the sensing system. The benefit of this approach is that the FBG signals can be transmitted over longer distances than in conventional systems such as geophones without affecting the sensitivity. They conducted a field trial in a debris flow site in
Figure 9: Borehole displacements at the lower part of the debris flow site [49].
Taiwan. The fiber optic vibration sensing system is found to be effective for sensing ground vibration between 10 and 200 Hz frequency range. | Sensors |
37 | 10.48550/arXiv.2204.12824 | b593fdc8-7e04-44a7-9374-162b46794ec4 | https://arxiv.org/pdf/2204.12824 | Fahad S. Alqurashi; Abderrahmen Trichili; Nasir Saeed; Boon S. Ooi; Mohamed-Slim Alouini | Maritime Communications: A Survey on Enabling Technologies, Opportunities, and Challenges | -8.78125 | b593fdc8-7e04-44a7-9374-162b46794ec4.pdf | 2,022 | arxiv_2_b593fdc8-7e04-44a7-9374-162b46794ec4_43 | arxiv | # Maritime Communications: A Survey on Enabling Technologies, Opportunities, and Challenges
## VI Conclusions
This article provides a state-of-the-art survey on maritime communications. We first provided an overview of maritime communication technologies based on radio bands and the optical spectrum. Different channel models for radio and optical wireless maritime links are studied. We also categorized the channel models depending on radio link communication scenarios and the weather conditions in free-space optics. We further covered different aspects of maritime networks, including modulation and coding schemes, radio resource management, coverage and capacity, and energy efficiency. Moreover, we presented major use cases of IoT-related maritime networks. Compared to terrestrial communication, MCNs still lack high-speed links. Marine communication has been, most of the time, limited to the exchange of navigational information and critical data. Maritime communication for civil use can be subject to security breaches. Bringing broadband connectivity to deep seas is another open challenge requiring further efforts. We finally discussed exciting research problems, including incorporating the visible light and THz spectra in on-board applications. We stressed leveraging the power of machine learning algorithms for maritime communication, and establishing reliable inter-medium communication is another area of focus. We believe this article provides valuable insights for maritime communications researchers in academia and industry and contributes to UN sustainable development goal 14 (\"To conserve and sustainably use the oceans, seas and marine resources for sustainable development\"). | null
38 | 10.1186/s13677-025-00762-9 | 8dcf3adb-e865-4a03-a7f7-758675fe42ac | https://journalofcloudcomputing.springeropen.com/counter/pdf/10.1186/s13677-025-00762-9.pdf | Hafiz Gulfam Ahmad Umar; Iqra Yasmeen; Muhammad Aoun; Tehseen Mazhar; Muhammad Amir Khan; Ines Hilali Jaghdam; Habib Hamam | Energy-efficient deep learning-based intrusion detection system for edge computing: a novel DNN-KDQ model | -10 | 8dcf3adb-e865-4a03-a7f7-758675fe42ac.md | 2,025 | springer_4_8dcf3adb-e865-4a03-a7f7-758675fe42ac_5.md | springer | ## Introduction
### Literature review
#### Current research
IDSs play a crucial role in securing IoT networks against cyber threats. Various approaches have been proposed to enhance the performance, accuracy, and efficiency of IDS solutions. This section reviews and discusses the diverse literature on ML-based IDS, highlighting their strengths and limitations. [PERSON] and [PERSON] [14] developed an IDS leveraging federated learning to enhance anomaly detection and data privacy in IoT devices. Both supervised and unsupervised DL models were trained, demonstrating improved performance compared to non-federated learning methods. However, the approach lacks an evaluation of anomaly patterns across different devices, limiting its generalizability. [PERSON] et al. [15] proposed a distributed IDS architecture utilizing flow-based anomaly detection optimized for resource consumption in IoT environments. The approach employed deep neural networks to detect malicious traffic and demonstrated strong performance, with false positive and false negative rates of 0.2 and 0.1. However, challenges remain in feature selection for anomaly detection due to device variability. [PERSON] et al. [16] introduced a hybrid LDA-LR-based IDS for edge computing, achieving an accuracy of 96.56% and a precision of 95.78%. The approach efficiently detected various attack types but lacked scalability analysis concerning the increasing number of IoT devices. Additionally, privacy issues in IoT data transfer remain an open research challenge. [PERSON] et al. [17] integrated an IDS with a Security Information and Event Management (SIEM) system to provide real-time cyberattack detection. The system effectively identified DoS attacks, but high CPU and RAM usage in Elasticsearch indicated the need for optimization to reduce resource consumption. [PERSON] et al. [18] introduced a robust multi-stage progressive autoencoder for anomaly detection in hyperspectral images. This approach demonstrated superior detection capabilities compared to state-of-the-art methods.
However, anomalies remain challenging to detect due to low occurrence probabilities, and feature extraction techniques require further improvements for better detection accuracy. [PERSON] et al. [19] proposed an optimized IDS for Industrial IoT (IIoT) networks using Deep Transfer Learning (DTL) and bootstrap aggregation ensemble techniques. The model achieved 100% accuracy across 14 cyberattack classes. However, the study did not address real-time intrusion detection challenges and lacked extensive testing on diverse IIoT environments. The Internet of Things (IoT) is a relatively recent domain in information technology that enables any object to communicate with any other device via a network. In recent years, cloud computing, big data, industrial wireless networks (IWNs), and the Industrial Internet of Things (IIoT) have significantly advanced. For IIoT to function, it requires a dependable and efficient data collection system, akin to a spanning tree. Early spanning tree algorithms disregarded failure and mobility entirely [20]. By replacing \"Things\" with \"Drones,\" the Internet of Drones (IoD) extends the Internet of Things (IoT) while retaining the unique characteristics of both ideas. IoD technologies have garnered a lot of attention lately because of their many useful applications, but convincing people that they are safe is still difficult. In the Internet of Drones (IoD), intrusion detection systems (IDSs) find it difficult to adjust to the constantly changing network architecture. Finding the ideal balance between detection speed and accuracy is one of the biggest obstacles. The use of radial basis function neural networks (RBNNs) to improve performance is illustrated in [21]. | Journal of Cloud Computing
5,000,006 | 10.1155/2016/2635124 | c438c611-4fa8-42f2-966c-8d9bcdf5656e | https://onlinelibrary.wiley.com/doi/pdfdirect/10.1155/2016/2635124 | Johannes Jordan; Elli Angelopoulou; Andreas Maier | A Novel Framework for Interactive Visualization and Analysis of Hyperspectral Image Data | -10.914063 | c438c611-4fa8-42f2-966c-8d9bcdf5656e.md | 2,016 | chunk_c438c611-4fa8-42f2-966c-8d9bcdf5656e_16.md | wiley | # A Novel Framework for Interactive Visualization and Analysis of Hyperspectral Image Data
## 4 Segmentation and Labeling
An interactive interface to the multispectral data is no replacement for automatic processing. In fact, the two approaches together form a powerful combination. Within our framework, it is easy to interpret and assess the results of algorithms used in automated analysis. These results can be a good starting point for further interactive analysis. Gerbil is equipped with two powerful methods that segment the data either according to spectral characteristics on a global level or based on topological relation and local similarity. In the latter case we bring supervised segmentation to the multispectral domain especially for the purpose of interactive inspection. | Journal of Electrical and Computer Engineering |
39 | 10.3390/app112210994 | def71901-7d71-4ff8-b5eb-91be10aa6df3 | https://www.mdpi.com/2076-3417/11/22/10994/pdf | Jiangfei Lou; Dan Wang; Jiugang Yuan; Xuerong Fan | Relationship between Anti-Wrinkle Property of Cotton Fabrics and Crosslinking Properties of Glycosyl Polyaldehydes and Polyuronic Acids Finishing Agents: A Molecular Simulation Study | -9.992188 | def71901-7d71-4ff8-b5eb-91be10aa6df3.md | 2,021 | chunk_def71901-7d71-4ff8-b5eb-91be10aa6df3_11.md | mdpi | ## 3 Results and Discussion
### Anti-Wrinkle Performance with Glycosyl Formaldehyde-Free Finishing Agents
Before discussing the molecular structure and properties of glycosyl formaldehyde-free finishing agents, it was necessary to analyze the anti-wrinkle property of cotton fabrics treated with those finishing agents. The structures of the glycosyl polyaldehyde finishing agents (OFr, OSu, OTr, ORa, OSt) and the glycosyl polyuronic acid finishing agents (openSu, openTr, openRa, openSt) are shown in Figure 3. As monosaccharides will only produce one carboxyl group after carboxylation, and a single carboxyl group cannot be cross-linked with hydroxyl groups according to the esterification cross-linking mechanism of polycarboxylic acid finishing agents, the monosaccharides did not undergo the two-step oxidation reaction. Among the cotton fabrics finished with these glycosyl finishing agents and the unfinished cotton fabrics, the BTCA-, GA-, and DMDHEU-finished fabrics were set as the control samples; the wrinkle recovery angle (WRA), whiteness index (WI), and tensile strength (\(TS\)) of the finished fabrics were measured, and the results are shown in Figure 4.

From Figure 4, the anti-wrinkle performance of fabrics finished with glucose and the OFr finishing agent failed to reach the standard range of cotton fabric anti-wrinkle performance (WRA \(\geq\) 250\({}^{\circ}\)) [1; 3]. The anti-wrinkle performance of the fabrics finished with the disaccharide, trisaccharide, and tetrasaccharide glycosyl anti-wrinkle finishing agents reached the standard range. After glucose finishing, the WRA of the treated fabric was increased slightly, because glucose is a reducing monosaccharide, and the aldehyde group of glucose was cross-linked with the hydroxyl group of cellulose at high temperature, which improved the WRA of the finished fabric.

After finishing with the sugar-based polyaldehyde finishing agents OSu, OTr, ORa, and OSt, the WRA of the fabric was significantly improved, and the ORa-finished fabric had the highest WRA of 249\({}^{\circ}\). The OFr-, OSu-, OTr-, ORa-, and OSt-finished fabrics had large differences in TS and WI: the TS and WI of the OFr-finished fabric were the best and those of the OSt-finished fabric were the worst. This may be because OFr has a low aldehyde content, while OSt has a higher aldehyde group content than OFr. The aldehyde groups fully reacted with the cellulose hydroxyl groups during the high-temperature curing, causing the strength and whiteness of the fabric to be significantly reduced.

Compared with the fabrics finished with the glycosyl polyaldehyde finishing agents, the WRAs of the fabrics finished with openSu, openTr, openRa, and openSt were higher; the WRA of the openTr-finished fabric reached 262\({}^{\circ}\). This was because the molecular structure of openTr is relatively symmetrical, and its molecular volume is small. The openSu-, openTr-, openRa-, and openSt-finished fabrics had good TS values, all above 68%, and the WI of the finished fabrics was around 70. | Applied Sciences
40 | 10.3390/app14062594 | 90fa928f-b0d6-432f-99fa-b4786bf9b27c | https://www.mdpi.com/2076-3417/14/6/2594/pdf | Mohd Tajularif Ibrahim; Nur Afiqah Hashim; Nasrul Anuar Abd Razak; Noor Azuan Abu Osman; Hossein Gholizadeh; Suryani Dyah Astuti | Techniques for Measuring the Fluctuation of Residual Lower Limb Volume in Clinical Practices: A Systematic Review of the Past Four Decades | -10.140625 | 90fa928f-b0d6-432f-99fa-b4786bf9b27c.md | 2,024 | chunk_90fa928f-b0d6-432f-99fa-b4786bf9b27c_20.md | mdpi | ## 4 Discussion
### Techniques for Measuring the Changes in the Residual Lower Limb
#### 4.1.1 Water Immersion
The water immersion measuring technique is one of the most frequently used techniques for determining the residual limb volume. Most publications used this technique as their 'gold standard' in performing validity tests [28; 29; 31; 32]. In this technique, the amount of water displaced from a tank is measured and calculated as the residual limb volume (Archimedes' principle). The measurement is conducted by asking the subject to lower their residual limb into the tank until it reaches a specific marker [21; 26; 28; 29; 30; 31; 32], or water is pumped into the tank to immerse the residual limb [27]. During the measurement sessions, the knee should be at the same degree of flexion/extension (knee flexed about 25\({}^{\circ}\)) throughout all the sessions, and the same marking placement for the residual limb immersion level should be used [21]. The temperature and atmospheric pressure of the water also need to be considered, since they also affect the results [30]. This measuring technique is quite sensitive to the subject's movement, as the subject needs to keep their residual limb in a constant position during the measurement sessions. Even one movement of the subject will produce inaccurate results [4]. Furthermore, since this is a contact measuring technique, it can distort the shape of the residual limb during the measurement sessions owing to hydrostatic or buoyancy effects. Later, [PERSON] et al. [37] made some improvements to this method by introducing a hydrostatic weighing technique that uses the apparent weight of the water with the residual limb immersed (i.e., the amount of displaced water no longer needs to be measured directly) to obtain the residual limb volume. | Applied Sciences
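The hydrostatic weighing improvement amounts to a one-line application of Archimedes' principle. The sketch below assumes the scale under the tank is read before and during immersion; the readings and the water density are illustrative assumptions:

```python
WATER_DENSITY_KG_PER_L = 0.9982  # assumed fresh water at about 20 degrees C

def limb_volume_litres(reading_before_kg: float,
                       reading_during_kg: float,
                       density: float = WATER_DENSITY_KG_PER_L) -> float:
    """Immersing the limb pushes on the water with the buoyant reaction
    force, so the tank's scale reading rises by the mass of the displaced
    water; dividing by the water density gives the limb volume."""
    return (reading_during_kg - reading_before_kg) / density

# Illustrative readings: tank alone 25.00 kg, limb immersed 27.25 kg.
volume = limb_volume_litres(25.00, 27.25)   # about 2.25 L
```

Because only scale readings enter the calculation, water temperature affects the result only through the density constant, which is easier to control than a displaced-volume reading.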
41 | 10.3390/app14135379 | 820d8d33-e402-483a-a6cb-d337093160a6 | https://www.mdpi.com/2076-3417/14/13/5379/pdf | Yaoyu Zhong; Mingjin Xu; Wenjun Kuang; Fubin Wan; Zhifan Lin; Yansong Fan; Qingqing Hu; Fufang Xu | Research on Subsurface Damage Measurement of Fused Silica in Ultra-Precision Grinding Based on Laser Damage Performance | -10.625 | 820d8d33-e402-483a-a6cb-d337093160a6.md | 2,024 | chunk_820d8d33-e402-483a-a6cb-d337093160a6_11.md | mdpi | ## References
* [PERSON] et al. (2022) [PERSON]; [PERSON]; [PERSON] [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; et al. High fluence laser damage precursors and their mitigation in fused silica. _Opt. Express_**2012**, _22_, 5839-5851. [CrossRef] [PubMed]
* [PERSON] and [PERSON] (2022) [PERSON]; [PERSON]; [PERSON] Etching behavior of ground fused silica and light enhancement modulated by surface/subsurface cracks. _Int. J. Appl. Glass Sci._**2022**, _13_, 664-675. [CrossRef]
* [PERSON] et al. (2006) [PERSON]; [PERSON]; [PERSON]; [PERSON] [PERSON]; [PERSON] [PERSON]; [PERSON]; [PERSON]; [PERSON] Sub-surface mechanical damage distributions during grinding of fused silica. _J. Non-Cryst. Solids_**2006**, _352_, 5601-5617. [CrossRef]
* [PERSON] et al. (2004) [PERSON]; [PERSON] [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON] [PERSON]; et al. NIF optical materials and fabrication technologies: An overview. In Proceedings of the SPIE--The International Society for Optical Engineering, San Diego, CA, USA, 15-19 March 2004.
* [PERSON] et al. (2020) [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON] Advances in shape controllable and property controllable manufacturing technology for ultraviolet fused silica components with high precision and few defects. _High Power Laser Part. Beams_**2020**, _32_, 032002.
* [PERSON] et al. (2011) [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; et al. Morphology and distribution of subsurface damage in optical fused silica parts: Bound-abrasive grinding. _Appl. Surf. Sci._**2011**, _257_, 2066-2073. [CrossRef]
* [PERSON] et al. (2023) [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON] A review of subsurface damage detection methods for optical components. _AIP Adv._**2023**, _13_, 060702. [CrossRef]
* [PERSON] and [PERSON] (2005) [PERSON]; [PERSON]; [PERSON] Subsurface damage in some single crystalline optical materials. _Appl. Opt._**2005**, _44_, 2241-2249. [CrossRef] [PubMed]
* [PERSON] et al. (1999) [PERSON]; [PERSON]; [PERSON]; [PERSON] Noncontact estimate of grinding-induced subsurface damage. In Proceedings of the Optical Manufacturing and Testing III, Denver, CO, USA, 20-23 July 1999.
* [PERSON] (2008) [PERSON] Study on the Detection and Control Techniques of Subsurface Damage in Optical Fabrication. Doctoral Dissertation, National University of Defense Technology, Changsha, China, 2008.
* [PERSON] (1987) [PERSON]; [PERSON] Optical glass fabrication technology 2: Relationship between surface roughness and subsurface damage. _Appl. Opt._**1987**, _26_, 4677-4680. [CrossRef] [PubMed]
* [PERSON] et al. (2008) [PERSON]; [PERSON]; [PERSON] Relationship between subsurface damage and surface roughness of optical materials in grinding and lapping processes. _J. Mater. Process. Technol._**2008**, _205_, 34-41. [CrossRef]
* [PERSON] et al. (2016) [PERSON]; [PERSON]; [PERSON]; [PERSON] Evaluation of grinding-induced subsurface damage in optical glass BK7. _J. Mater. Process. Technol._**2016**, _229_, 785-794. [CrossRef]
* [PERSON] et al. (2018) [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON] Effect of grinding parameters on surface roughness and subsurface damage and their evaluation in fused silica. _Opt. Express_**2018**, _26_, 4638-4655. [CrossRef] [PubMed]
* [PERSON] et al. (2017) [PERSON]; [PERSON]; [PERSON] Theoretical model of brittle material removal fraction related tosurface roughness and subsurface damage depth of optical glass during precision grinding. _Precis. Eng._**2017**, _49_, 421-427. [CrossRef]
* [PERSON] et al. (2021) [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON] Models of grinding-induced surface and subsurface damages in fused silica considering strain rate and micro shape/geometry of abrasive. _Ceram. Int._**2021**, _47_, 24924-24941. [CrossRef]
* [PERSON] et al. (2019) [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON] Surface Integrity of Quartz Glass Induced by Ultra-precision Grinding. | Applied Sciences |
42 | 10.13164/re.2015.0956 | e2b732ec-da68-44d9-a4f1-5746bd2af049 | https://www.radioeng.cz/fulltexts/2015/15_04_0956_0961.pdf | B. Dimitrijevic; V. Krstic; B. Nikolic | A Novel Diversity Receiver Structure for Severe Fading and Frequency Offset Conditions | -9.640625 | e2b732ec-da68-44d9-a4f1-5746bd2af049.md | 2,015 | chunk_e2b732ec-da68-44d9-a4f1-5746bd2af049_5.md | radioeng_cz | # A Novel Diversity Receiver Structure
## 2 System Model
Substituting (12) in (14), the ECC algorithm for the estimation of the combining coefficients in the diversity combiner is obtained; it is described by the relation:

\[r_{i}(t)=s_{i}(t)+n_{i}(t),\quad i=0,1,\ldots,N_{a}-1\,. \tag{1}\]
Results obtained using this algorithm will be compared with those obtained using the CMA1 and CMA2 algorithms [19]. In the case of the CMA1 algorithm, \(V_{i}(k)\) can be written as [19]:

\[V_{i}(k)=V_{i}(k-1)+\mu_{\eta 1}\left(\frac{1}{\left|Z(k-1)\right|}-1\right)Z(k-1)X_{i}^{*}(k-1) \tag{16}\]
and in the case of the CMA2 algorithm, the weight of the \(i\)-th branch is described by [19]:

\[V_{i}(k)=V_{i}(k-1)+\mu_{\eta 2}\left(1-\left|Z(k-1)\right|^{2}\right)Z(k-1)X_{i}^{*}(k-1) \tag{17}\]
where \(\mu_{\eta 1}\) and \(\mu_{\eta 2}\) are the adaptation factors. Considering the good performance of the receiver described in [16] in the presence of carrier frequency offset, the recursive filter with remodulation is used for the detection process in the receiver proposed here. Since the influence of the recursive filter length on the error probability is negligible [16], we propose to use a transversal filter of unitary length in this block, due to its simplicity. In that case the signal \(\hat{Z}(k)\) may be written as:
\[\hat{Z}(k)=A\,\tilde{Z}(k)+(1-A)\,W_{\mbox{\tiny U}}\left(k\right)\hat{Z}(k-1) \tag{18}\]
where \(A\) denotes the introduced constant parameter (\(A\leq 1\)). The value \((1-A)\) defines a part of the output signal that is returned to the input. We get \(\tilde{Z}(k)\) after the remodulation:
\[\tilde{Z}(k)=R_{\mbox{\tiny H}}\left(k\right)Z(k) \tag{19}\]
where \(R_{\mbox{\tiny H}}(k)\) is the remodulation weight. The adjustment of the weights \(W_{\mbox{\tiny U}}(k)\) is performed by the normalized LMS algorithm [20], [21]:
\[W_{\mbox{\tiny U}}(k+1)=W_{\mbox{\tiny U}}(k)+\frac{\mu_{\mbox{\tiny U}}E(k) \hat{Z}^{*}(k-1)}{\left|\hat{Z}(k)\right|^{2}} \tag{20}\]
where \(\mu_{\mbox{\tiny U}}\) is the adaptation factor. The error signal is obtained as:
\[E(k)=R_{\mbox{\tiny H}}\left(k\right)Z(k)-Y(k)=\tilde{Z}(k)-Y(k) \tag{21}\]
Figure 1: Block diagram of the proposed receiver.

where
\[Y(k)=W_{U}(k)\,\hat{Z}(k-1)\,. \tag{22}\]
The detected symbol is obtained by the following minimization:
\[\hat{d}(k)=\operatorname*{arg\,min}_{r\in\{0,\ldots,M-1\}}\left|\exp\biggl{[}j\,\frac{2\pi r}{M}\biggr{]}Z(k)-Y(k)\right|^{2}. \tag{23}\]
The remodulation weight is:
\[R_{\mbox{\tiny H}}\left(k\right)=\exp\biggl{[}j\,\frac{2\pi\hat{d}\left(k\right)}{M}\biggr{]}. \tag{24}\] | Radioengineering
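Eqs. (18)-(24) define one detection step per received sample and can be sketched as a short loop. The mapping to the equations follows the text; the initialization (\(W_{U}(0)=1\), \(\hat{Z}(0)=1\)), the values of \(A\) and \(\mu_{U}\), and the function name are illustrative assumptions, not the authors' implementation:

```python
import cmath

def detect_mpsk(z_samples, M=4, A=0.9, mu=0.1):
    """One-tap recursive filter with remodulation (Eqs. (18)-(24)):
    z_samples are the combiner outputs Z(k); returns detected symbols."""
    W = 1 + 0j            # W_U(k), transversal filter of unitary length
    z_hat_prev = 1 + 0j   # Z_hat(k-1), assumed initialization
    detected = []
    for Z in z_samples:
        Y = W * z_hat_prev                                         # Eq. (22)
        # Eq. (23): rotation of Z(k) closest to the filtered reference Y(k)
        d = min(range(M),
                key=lambda r: abs(cmath.exp(1j * 2 * cmath.pi * r / M) * Z - Y))
        R = cmath.exp(1j * 2 * cmath.pi * d / M)                   # Eq. (24)
        z_tilde = R * Z                                            # Eq. (19)
        z_hat = A * z_tilde + (1 - A) * W * z_hat_prev             # Eq. (18)
        E = z_tilde - Y                                            # Eq. (21)
        W = W + mu * E * z_hat_prev.conjugate() / abs(z_hat) ** 2  # Eq. (20)
        z_hat_prev = z_hat
        detected.append(d)
    return detected
```

In the noiseless case, feeding \(Z(k)=\exp(-j2\pi d_{k}/M)\) recovers each \(d_{k}\) exactly, since the remodulated signal coincides with the reference and the error \(E(k)\) stays zero.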
43 | 10.48550/arXiv.1312.1450 | c9ab7676-4885-425c-9587-195498031175 | https://arxiv.org/pdf/1312.1450 | Liang Liu; Rui Zhang; Kee-Chaing Chua | Multi-Antenna Wireless Powered Communication with Energy Beamforming | -10.851563 | c9ab7676-4885-425c-9587-195498031175.pdf | 2,013 | arxiv_3_c9ab7676-4885-425c-9587-195498031175_24 | arxiv | # Multi-Antenna Wireless Powered Communication with Energy Beamforming
## References
* [1] [PERSON] and [PERSON], \"MIMO broadcasting for simultaneous wireless information and power transfer,\" _IEEE Trans. Wireless Commun._, vol. 12, no. 5, pp. 1989-2001, May 2013.
* [2] [PERSON], \"Transporting information and energy simultaneously,\" in _Proc. IEEE Int. Symp. Inf. Theory (ISIT)_, pp. 1612-1616, July 2008.
* [3] [PERSON] and [PERSON], \"[PERSON] meets [PERSON]: wireless information and power transfer,\" in _Proc. IEEE Int. Symp. Inf. Theory (ISIT)_, pp. 2363-2367, June 2010.
# SatCom Chunk Collection
A large-scale dataset of 1,900,085 text chunks extracted from satellite communication (SatCom) research papers.

Each chunk is enriched with structured metadata (e.g., publication details and authorship), a domain relevance score, and a precomputed embedding vector from Qwen3-Embedding-4B, enabling efficient semantic retrieval and downstream retrieval-augmented generation (RAG).
## Dataset Structure
The dataset is organized into two subsets:
### `chunks` (default)

Text content and metadata. Lightweight and previewable in the Hugging Face dataset viewer.
| Column | Type | Description |
|---|---|---|
| `id` | int64 | Unique identifier for each chunk |
| `content` | string | The text content of the chunk |
| `title` | string | Title of the source paper |
| `authors` | string | Authors of the source paper |
| `doi` | string | Digital Object Identifier of the source paper |
| `url` | string | URL to the source paper |
| `journal` | string | Journal or venue where the paper was published |
| `publisher` | string | Publisher of the source paper |
| `year` | float64 | Publication year (ranges from 1929 to 2026) |
| `score` | float64 | Domain relevance score produced by UltraRM, a reward model. Represents how closely the chunk's content relates to the satellite communication domain. Higher values (closer to 0) indicate stronger relevance |
| `file_id` | string | Internal file identifier |
| `original_file_name` | string | Original filename of the source document |
| `chunk_name` | string | Name/identifier of the chunk within its source document |
### `embeddings`

Precomputed embedding vectors, joinable with `chunks` via the `id` column.
| Column | Type | Description |
|---|---|---|
| `id` | int64 | Unique identifier (matches `chunks.id`) |
| `vector` | `fixed_size_list<float32>[2560]` | 2560-dimensional embedding vector |
## Usage

```python
from datasets import load_dataset
import pandas as pd

# Load text and metadata (default)
chunks = load_dataset("esa-sceva/satcom-chunk-collection")

# Load embeddings
embeddings = load_dataset("esa-sceva/satcom-chunk-collection", "embeddings")

# Merge on id when you need both text and vectors
chunks_df = chunks["train"].to_pandas()
embeddings_df = embeddings["train"].to_pandas()
merged = chunks_df.merge(embeddings_df, on="id")
```
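Once merged, the 2560-dimensional vectors can be searched by cosine similarity for semantic retrieval. A minimal brute-force sketch follows, demonstrated on small random stand-in vectors; in practice the query would have to be embedded with the same Qwen3-Embedding-4B model, and the `top_k_search` helper is illustrative, not part of the dataset:

```python
import numpy as np

def top_k_search(query: np.ndarray, vectors: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k vectors most cosine-similar to the query."""
    # Normalise so the dot product equals cosine similarity
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q
    # argsort is ascending: take the last k indices, reversed for best-first
    return np.argsort(scores)[-k:][::-1]

# Toy demo with 3-dimensional stand-ins for the 2560-dim embeddings
rng = np.random.default_rng(0)
vectors = rng.normal(size=(100, 3)).astype(np.float32)
query = vectors[42].copy()  # a known row should rank first
print(top_k_search(query, vectors, k=3))
```

For the full 1.9M-row collection, an approximate nearest-neighbour index (e.g., FAISS) would replace this exhaustive scan.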
## Chunking Strategy

Documents were split using hierarchical chunking, which preserves the logical structure of research papers (sections, subsections, paragraphs) rather than splitting at arbitrary token boundaries. This ensures that each chunk captures a coherent unit of information. The maximum token length per chunk is 1,048 tokens.
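The idea can be sketched with a toy splitter that first breaks a document at headings so no chunk spans two sections, then packs paragraphs into chunks under a token budget. This is an illustrative simplification of the actual pipeline, using whitespace word count as a stand-in for a real tokenizer:

```python
def hierarchical_chunks(text: str, max_tokens: int = 1048) -> list[str]:
    """Split markdown-style text into chunks that respect section boundaries."""
    chunks = []
    # Level 1: cut at headings so a chunk never spans two sections
    sections, current = [], []
    for line in text.splitlines():
        if line.startswith("#") and current:
            sections.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        sections.append("\n".join(current))
    # Level 2: within a section, pack paragraphs up to the token budget
    for section in sections:
        buf, used = [], 0
        for para in section.split("\n\n"):
            n = len(para.split())  # crude token-count proxy
            if buf and used + n > max_tokens:
                chunks.append("\n\n".join(buf))
                buf, used = [], 0
            buf.append(para)
            used += n
        if buf:
            chunks.append("\n\n".join(buf))
    return chunks

doc = "# A\n\npara one\n\npara two\n\n# B\n\npara three"
print(hierarchical_chunks(doc))  # each section fits in one chunk here
```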
## Dataset Statistics
- Total rows: 1,900,085
- Max chunk length: 1,048 tokens
- Chunking method: Hierarchical
- Embedding dimensions: 2,560 (float32)
- Publication years: 1929–2026
- Source: Satellite communication research literature
## Score Details

The `score` column is a domain relevance score computed using UltraRM-13b, a reward model developed by OpenBMB. Each chunk was scored based on how closely its content aligns with the satellite communication domain. Scores are negative floats where values closer to zero indicate higher relevance to SatCom.
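Since higher (less negative) scores mean stronger SatCom relevance, selecting the most on-topic chunks reduces to a simple threshold on the column. A minimal sketch with toy rows; the -9.0 cut-off is an arbitrary illustration, not a recommended value:

```python
# Toy stand-in rows for the `chunks` subset (scores are illustrative)
rows = [
    {"id": 1, "content": "LEO handover ...", "score": -7.8},
    {"id": 2, "content": "MXene electrodes ...", "score": -10.9},
    {"id": 3, "content": "SLR to LEO ...", "score": -8.8},
]

THRESHOLD = -9.0  # arbitrary illustrative cut-off

# Values closer to zero are more relevant, so keep score > threshold
relevant_ids = [r["id"] for r in rows if r["score"] > THRESHOLD]
print(relevant_ids)  # → [1, 3]
```

The same filter applies unchanged to the real pandas DataFrame, e.g. `chunks_df[chunks_df["score"] > THRESHOLD]`.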