
Julio B. Clempner
National Polytechnic Institute
Verified email at clempner.name
Title / Cited by / Year
Fast terminal sliding-mode control with an integral filter applied to a van der Pol oscillator
CU Solis, JB Clempner, AS Poznyak
IEEE Transactions on Industrial Electronics 64 (7), 5622-5628, 2017
Cited by 86, 2017
Simple computing of the customer lifetime value: A fixed local-optimal policy approach
JB Clempner, AS Poznyak
Journal of Systems Science and Systems Engineering 23 (4), 439-459, 2014
Cited by 64, 2014
Computing the Stackelberg/Nash equilibria using the extraproximal method: Convergence analysis and implementation details for Markov chains games
KK Trejo, JB Clempner, AS Poznyak
International Journal of Applied Mathematics and Computer Science 25 (2 …, 2015
Cited by 61, 2015
A Stackelberg security game with random strategies based on the extraproximal theoretic approach
KK Trejo, JB Clempner, AS Poznyak
Engineering Applications of Artificial Intelligence 37, 145-153, 2015
Cited by 58, 2015
Continuous-time reinforcement learning approach for portfolio management with time penalization
M García-Galicia, AA Carsteanu, JB Clempner
Expert Systems with Applications 129, 27-36, 2019
Cited by 49, 2019
Convergence method, properties and computational complexity for Lyapunov games
JB Clempner, AS Poznyak
International Journal of Applied Mathematics and Computer Science 21 (2 …, 2011
Cited by 46, 2011
Verifying soundness of business processes: A decision process Petri nets approach
J Clempner
Expert Systems with Applications 41 (11), 5030-5040, 2014
Cited by 44, 2014
Stackelberg security games: Computing the shortest-path equilibrium
JB Clempner, AS Poznyak
Expert Systems with Applications 42 (8), 3967-3979, 2015
Cited by 43, 2015
Modeling the multi-traffic signal-control synchronization: A Markov chains game theory approach
JB Clempner, AS Poznyak
Engineering Applications of Artificial Intelligence 43, 147-156, 2015
Cited by 42, 2015
Adapting strategies to dynamic environments in controllable Stackelberg security games
KK Trejo, JB Clempner, AS Poznyak
2016 IEEE 55th Conference on Decision and Control (CDC), 5484-5489, 2016
Cited by 36, 2016
Computing the strong Lp-Nash equilibrium for Markov chains games: Convergence and uniqueness
KK Trejo, JB Clempner, AS Poznyak
Applied Mathematical Modelling 41, 399-418, 2017
Cited by 35, 2017
A Tikhonov regularized penalty function approach for solving polylinear programming problems
JB Clempner, AS Poznyak
Journal of Computational and Applied Mathematics 328, 267-286, 2018
Cited by 33, 2018
Modeling multileader–follower noncooperative Stackelberg games
CU Solis, JB Clempner, AS Poznyak
Cybernetics and Systems 47 (8), 650-673, 2016
Cited by 32, 2016
Necessary and sufficient Karush–Kuhn–Tucker conditions for multiobjective Markov chains optimality
JB Clempner
Automatica 71, 135-142, 2016
Cited by 32, 2016
A priori-knowledge/actor-critic reinforcement learning architecture for computing the mean–variance customer portfolio: the case of bank marketing campaigns
EM Sánchez, JB Clempner, AS Poznyak
Engineering Applications of Artificial Intelligence 46, 82-92, 2015
Cited by 32, 2015
A Tikhonov regularization parameter approach for solving Lagrange constrained optimization problems
JB Clempner, AS Poznyak
Engineering Optimization 50 (11), 1996-2012, 2018
Cited by 31, 2018
Traffic-signal control reinforcement learning approach for continuous-time Markov games
R Aragon-Gómez, JB Clempner
Engineering Applications of Artificial Intelligence 89, 103415, 2020
Cited by 30, 2020
Adapting attackers and defenders patrolling strategies: A reinforcement learning approach for Stackelberg security games
KK Trejo, JB Clempner, AS Poznyak
Journal of Computer and System Sciences 95, 35-54, 2018
Cited by 30, 2018
Strategic planning of information technology in dynamic and uncertain environments (Planeación estratégica de tecnología de información en entornos dinámicos e inciertos)
JC Kerik, AG Tornés
Revista digital universitaria 2 (4), 9, 2001
Cited by 29, 2001
A Stackelberg security Markov game based on partial information for strategic decision making against unexpected attacks
SE Albarran, JB Clempner
Engineering Applications of Artificial Intelligence 81, 408-419, 2019
Cited by 28, 2019
Articles 1–20